• Title/Summary/Keyword: scenarios

Search Results: 5,219

Forecasting Substitution and Competition among Previous and New products using Choice-based Diffusion Model with Switching Cost: Focusing on Substitution and Competition among Previous and New Fixed Charged Broadcasting Services (전환 비용이 반영된 선택 기반 확산 모형을 통한 신.구 상품간 대체 및 경쟁 예측: 신.구 유료 방송서비스간 대체 및 경쟁 사례를 중심으로)

  • Koh, Dae-Young;Hwang, Jun-Seok;Oh, Hyun-Seok;Lee, Jong-Su
    • Journal of Global Scholars of Marketing Science / v.18 no.2 / pp.223-252 / 2008
  • In this study, we propose a choice-based diffusion model with switching cost that can be used to forecast dynamic substitution and competition between previous and new products at both the individual and aggregate levels, especially when market data for the new products are insufficient. We apply the proposed model to the empirical case of substitution and competition among Analog Cable TV, which represents the previous fixed charged broadcasting service, and Digital Cable TV and Internet Protocol TV (IPTV), which are the new ones; verify the validity of the proposed model; and derive related empirical implications. For the empirical application, we obtained data from a survey administered by Dongseo Research in May 2007 to 1,000 adults aged 20 to 60 living in Seoul, Korea, under the title 'Demand analysis of next generation fixed interactive broadcasting services'. A modified conjoint survey was used. First, following the traditional approach in conjoint analysis, we extracted 16 hypothetical alternative cards from an orthogonal design using important attributes and levels of next generation interactive broadcasting services, determined from a literature review and experts' comments. We then divided the 16 conjoint cards into 4 groups, composing 4 choice sets with 4 alternatives each, so that each respondent faced 4 different hypothetical choice situations. On top of this, we made two modifications. First, we asked the respondents to include the status-quo broadcasting service they currently subscribe to as an additional alternative in each choice set. As a result, respondents chose the most preferred alternative among 5 alternatives, consisting of the current subscription and 4 hypothetical alternatives, in each of the 4 choice sets.
Modifying the traditional conjoint survey in this way enabled us to estimate the factors related to switching cost or switching threshold in addition to the effects of the attributes. Also, by using both revealed preference data (the alternative with the current subscription) and stated preference data (the 4 hypothetical alternatives), additional advantages in estimation properties and more conservative, realistic forecasts can be achieved. Second, we asked the respondents to choose the most preferred alternative while considering their expected adoption or switching timing, reported among 14 half-year points after the introduction of the next generation broadcasting services. As a result, 14 observations with 5 alternatives per period were obtained for each respondent, yielding panel-type data. This panel of 4 × 14 × 1,000 = 56,000 observations was used to estimate the individual-level consumer adoption model. The empirical results show that forecasting the demand for new products without considering the existence of the previous product(s) and/or switching cost factors yields an overestimated diffusion speed at the introductory stage or otherwise distorted predictions, which verifies the validity of our proposed model, in which both the existence of previous products and switching cost factors are properly considered. The proposed model can also produce flexible patterns of market evolution, depending on how strongly consumer preferences for the attributes of the alternatives affect individual-level state transitions, rather than following an S-shaped curve assumed a priori. Empirically, in various scenarios with diverse price combinations, IPTV is more likely to take an advantageous position over Digital Cable TV in obtaining subscribers.
Meanwhile, despite its inferiority in many technological attributes, Analog Cable TV, regarded as the previous product in our analysis, is likely to be substituted by the new services gradually rather than abruptly, thanks to its low service charge and the high switching cost in the fixed charged broadcasting service market.
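The role of switching cost in the individual-level choice stage can be sketched as a multinomial logit in which only non-incumbent alternatives incur the cost. This is an illustrative sketch with invented utility values and a single scalar switching cost, not the paper's estimated model:

```python
import math

def choice_probabilities(utilities, switching_cost, current_idx):
    """Multinomial-logit choice probabilities in which every alternative
    except the currently subscribed one incurs a switching cost."""
    # Subtract the switching cost from all non-incumbent alternatives.
    adjusted = [u if i == current_idx else u - switching_cost
                for i, u in enumerate(utilities)]
    exps = [math.exp(u) for u in adjusted]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for Analog Cable TV (current service),
# Digital Cable TV, and IPTV -- illustrative numbers, not estimated values.
with_cost = choice_probabilities([0.2, 0.8, 1.0], switching_cost=1.5, current_idx=0)
no_cost = choice_probabilities([0.2, 0.8, 1.0], switching_cost=0.0, current_idx=0)
```

With a sufficiently high switching cost, the incumbent retains the largest choice share even when the new services offer higher systematic utility, which is the mechanism behind the gradual rather than abrupt substitution of Analog Cable TV.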


Assessment of Estimated Daily Intakes of Artificial Sweeteners from Non-alcoholic Beverages in Children and Adolescents (어린이와 청소년의 비알콜성음료 섭취에 따른 인공감미료 섭취량 평가)

  • Kim, Sung-Dan;Moon, Hyun-Kyung;Lee, Jib-Ho;Chang, Min-Su;Shin, Young;Jung, Sun-Ok;Yun, Eun-Sun;Jo, Han-Bin;Kim, Jung-Hun
    • Journal of the Korean Society of Food Science and Nutrition / v.43 no.8 / pp.1304-1316 / 2014
  • The aims of this study were to estimate daily intakes of artificial sweeteners from beverages and liquid teas and to evaluate their potential health risks in Korean children and adolescents (1 to 19 years old). The dietary intake assessment used measured levels of aspartame, acesulfame-K, and sucralose in non-alcoholic beverages (651 beverages and 87 liquid teas), and food consumption amounts were drawn from "The Fourth Korea National Health and Nutrition Examination Survey (2007~2009)". To estimate the dietary intake from non-alcoholic beverages, all 6,082 children and adolescents (Scenario I) were compared with the 1,704 subjects who actually consumed non-alcoholic beverages (Scenario II). The estimated daily intake of artificial sweeteners was calculated using both point estimates and probabilistic estimates; the probabilistic intake values were obtained by a Monte Carlo approach using probability density functions of the variables. Safety was evaluated by comparison with the acceptable daily intakes (ADI) of aspartame (0~40 mg/kg bw/day), acesulfame-K (0~15 mg/kg bw/day), and sucralose (0~15 mg/kg bw/day) set by the World Health Organization. For all children and adolescents (Scenario I), the mean daily intakes of aspartame, acesulfame-K, and sucralose estimated by Monte Carlo simulation were 0.09, 0.01, and 0.04 mg/kg bw/day, respectively, and the 95th percentile daily intakes were 0.30, 0.02, and 0.13 mg/kg bw/day, respectively. For consumers only (Scenario II), the corresponding mean daily intakes were 0.52, 0.03, and 0.22 mg/kg bw/day, and the 95th percentile daily intakes were 1.80, 0.12, and 0.75 mg/kg bw/day. In both scenarios, neither the mean nor the 95th percentile intake of any of the three sweeteners exceeded 5.06% of its ADI.
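The probabilistic estimation step can be sketched as a simple Monte Carlo simulation of intake per kilogram of body weight. This is a minimal illustration assuming normal distributions for concentration and consumption; all parameter values (means, standard deviations, the 40 kg body weight) are invented, not the study's fitted inputs:

```python
import random

random.seed(42)

def simulate_intake(n_draws, conc_mean, conc_sd, cons_mean, cons_sd, body_weight):
    """Monte Carlo estimate of daily sweetener intake (mg/kg bw/day).

    Concentration (mg/L) and daily consumption (L/day) are drawn from
    illustrative normal distributions truncated at zero.
    """
    intakes = []
    for _ in range(n_draws):
        conc = max(0.0, random.gauss(conc_mean, conc_sd))      # mg/L
        consumed = max(0.0, random.gauss(cons_mean, cons_sd))  # L/day
        intakes.append(conc * consumed / body_weight)          # mg/kg bw/day
    return intakes

intakes = simulate_intake(10_000, conc_mean=100.0, conc_sd=30.0,
                          cons_mean=0.2, cons_sd=0.1, body_weight=40.0)
intakes.sort()
mean_intake = sum(intakes) / len(intakes)
p95_intake = intakes[int(0.95 * len(intakes))]
adi_aspartame = 40.0  # mg/kg bw/day (WHO ADI upper bound)
percent_of_adi = 100.0 * p95_intake / adi_aspartame
```

Reporting both the mean and the 95th percentile of the simulated distribution, and expressing them as a percentage of the ADI, mirrors the study's presentation of results.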

Differential Effects of Recovery Efforts on Products Attitudes (제품태도에 대한 회복노력의 차별적 효과)

  • Kim, Cheon-GIl;Choi, Jung-Mi
    • Journal of Global Scholars of Marketing Science / v.18 no.1 / pp.33-58 / 2008
  • Previous research has presupposed that the evaluation by a consumer who received some recovery after experiencing a product failure should be better than that of a consumer who received none. The major purposes of this article are to examine the impact of product defect failures, rather than service failures, and to explore the effects of recovery on post-recovery product attitudes. First, this article deals with the occurrence of severe and unsevere failures, and the corresponding recovery, for tangible products rather than intangible services. Contrary to intangible services, purchase and usage are separable for tangible products. This difference means that a recovery strategy for a tangible product often cannot be executed right after the consumer discovers the failure. Consumers may think about the background and causes of the unpleasant event during the time gap between product failure and recovery, and this deliberation may dilute the positive effects of recovery efforts. The recovery strategies provided to consumers experiencing product failures can be classified into three types. A strategy may provide consumers with a new product replacing the defective one, a complimentary product for free, a discount at the time of the failure incident, or a coupon for the next visit; this is defined as "a rewarding effort." Meanwhile, a product failure may arise in exchange for a benefit: the provider can explain in detail that the defect is hard to avoid because it is closely tied to a specific advantage of the product. This is called "a strengthening effort." Another possible strategy is to recover the negative attitude toward one's own brand by highlighting the disadvantages of a competing brand rather than the advantages of one's own; this is termed "a weakening effort."
This paper emphasizes that, to confirm its effectiveness, a recovery strategy should be compared with doing nothing in response to the product failure, so the three types of recovery efforts are discussed in comparison with the no-recovery situation. The strengthening strategy claims a high relatedness of the product failure to another advantage, expecting this two-sidedness to ease consumers' complaints. The weakening strategy emphasizes the non-aversiveness of the product failure even if consumers were to choose a competing brand. Both strategies can be effective in restoring the original state, by providing plausible motives to accept the condition of product failure or by informing consumers of non-responsibility in the failure case. However, they may be less effective than the rewarding strategy, which addresses consumers' rehabilitation needs. In particular, the relative effect of the strengthening and weakening efforts may differ with the severity of the product failure. A consumer who perceives a highly severe failure is likely to attach importance to the property that caused it, implying that the strengthening effort would be less effective under high severity. Meanwhile, the failing property is not diagnostic information under low failure severity; consumers would not pay attention to non-diagnostic information and are unlikely to change their attitudes based on it, implying that the strengthening effort would be more effective under low severity. A 2 (product failure severity: high or low) × 4 (recovery strategy: rewarding, strengthening, weakening, or doing nothing) between-subjects design was employed. The particular levels of product failure severity and the types of recovery strategies were determined after a series of expert interviews.
The dependent variable was product attitude after the recovery effort was provided. Subjects were 284 consumers who had experience with cosmetics. Subjects were first given a product failure scenario and asked to rate the comprehensibility of the scenario, the probability of raising a complaint about the failure, and the subjective severity of the failure. After a recovery scenario was presented, its comprehensibility and overall evaluation were measured. Subjects assigned to the no-recovery condition were instead exposed to a short news article on the cosmetics industry. Next, subjects answered filler questions: 42 items on need for cognitive closure and 16 items on need to evaluate. On the succeeding page, each subject's product attitude was measured on a five-item, six-point scale, and repurchase intention on a three-item, six-point scale. After the demographic variables of age and sex were asked, ten items of objective knowledge were checked. The results showed that subjects formed more favorable evaluations after receiving rewarding efforts than after receiving either strengthening or weakening efforts. This is consistent with Hoffman, Kelley, and Rotalsky (1995) in that a tangible recovery can be more effective than intangible efforts. Strengthening and weakening efforts were also effective compared with no recovery effort, so we found that generally any recovery increased product attitudes. The results hint that a recovery strategy such as a strengthening or weakening effort, although it contains no specific reward, may still affect consumers experiencing severe dissatisfaction and strong complaints. Meanwhile, strengthening and weakening efforts were not expected to increase product attitudes under the condition of low product failure severity.
We conclude that only a physical recovery effort may be recognized favorably, as a firm's willingness to recover from its fault, by consumers with low involvement. The results of the experiment are explained in terms of attribution theory. This article has the limitation of using fictitious scenarios; future research should test the effect of recovery for actual consumers. Recovery involves a direct, firsthand experience of ex-users and does not apply to non-users; the experience of receiving recovery efforts should be more salient and accessible for ex-users, so a recovery effort might be more likely to improve product attitude for ex-users than for non-users. Also, the present experiment did not include consumers who had no experience with the products or who did not perceive the occurrence of the product failure. For such non-users and unaware consumers, recovery efforts might even decrease product attitude and purchase intention, because the recovery attempt may give them an opportunity to notice the product failure.
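At its simplest, the analysis of a 2 × 4 between-subjects design reduces to comparing cell means and marginal means across conditions. The sketch below uses invented ratings, not the study's data, purely to show the structure of the comparison:

```python
from statistics import mean

# Illustrative six-point-scale ratings for the 2 (failure severity) x 4
# (recovery strategy) between-subjects design; all numbers are invented.
severities = ["high", "low"]
strategies = ["rewarding", "strengthening", "weakening", "none"]
ratings = {
    ("high", "rewarding"): [5.1, 4.8, 5.0],
    ("high", "strengthening"): [3.2, 3.5, 3.1],
    ("high", "weakening"): [3.8, 3.6, 3.9],
    ("high", "none"): [2.1, 2.4, 2.2],
    ("low", "rewarding"): [5.0, 5.2, 4.9],
    ("low", "strengthening"): [4.1, 4.3, 4.0],
    ("low", "weakening"): [3.9, 4.0, 4.2],
    ("low", "none"): [3.0, 2.8, 3.1],
}

# Cell means, then the marginal mean for each recovery strategy.
cell_means = {cell: mean(vals) for cell, vals in ratings.items()}
strategy_means = {s: mean(cell_means[(sev, s)] for sev in severities)
                  for s in strategies}
best = max(strategy_means, key=strategy_means.get)
```

A full analysis would test the severity × strategy interaction (e.g., with a two-way ANOVA), which is where the predicted reversal between strengthening and weakening efforts would appear.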


An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with the vocabulary and its meaning, the relationships between ontologies, and the key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. The key attributes of the service ontology are the categories objective, requirements, activity, and service.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries; its key attributes are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation; its key attributes are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; and the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time.
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country, and/or the priority list of target countries for a certain public service, is generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of the enterprises. For financial packages, a mix of various foreign aid funds can be simulated at this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. They could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service exports. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
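The five-ontology structure described above can be sketched as a plain data structure. The attribute lists follow the abstract, while the toy `match_score` function is our stand-in for the paper's (unspecified) matching algorithm, not its actual implementation:

```python
# The five platform ontologies and their key attributes, as named in the
# abstract. Real ontologies (e.g., edited in Protege) would also carry
# sub-attributes and typed relationships; this flat form is a sketch.
ontologies = {
    "service": ["objective", "requirements", "activity", "service"],
    "requirements": ["business", "technology", "constraints"],
    "environment": ["user", "requirements", "activity"],
    "enterprise": ["activity", "organization", "strategy", "marketing", "time"],
    "country": ["economy", "social infrastructure", "law", "regulation",
                "customs", "population", "location", "development strategies"],
}

def match_score(attrs_a, attrs_b):
    """Toy matching heuristic: count shared attribute names. A real
    platform would use a domain-specific matching algorithm over
    attribute values, not just names."""
    return len(set(attrs_a) & set(attrs_b))

# The service and environment ontologies share the 'requirements' and
# 'activity' attributes.
score = match_score(ontologies["service"], ontologies["environment"])
```

Ranking candidate (service, country) pairs by such a score is one simple way a priority list could be seeded before the gap analysis and simulation stages.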

NFC-based Smartwork Service Model Design (NFC 기반의 스마트워크 서비스 모델 설계)

  • Park, Arum;Kang, Min Su;Jun, Jungho;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.157-175 / 2013
  • Since the Korean government announced its 'Smartwork promotion strategy' in 2010, Korean firms and government organizations have started to adopt smartwork. However, smartwork has been implemented mainly in a few large enterprises and government organizations rather than in SMEs (small and medium enterprises). In the USA, both Yahoo! and Best Buy have stopped their flexible work programs because of reported low productivity and job loafing problems. In addition, from the literature on smartwork, we draw the obstacles to smartwork adoption and categorize them into three types: institutional, organizational, and technological. The first category, institutional obstacles, includes the difficulty of defining smartwork performance evaluation metrics, the lack of readiness of organizational processes, the limited types and models of smartwork, the lack of employee participation in the adoption procedure, the high cost of building a smartwork system, and insufficient government support. The second category, organizational obstacles, includes the limitations of the organizational hierarchy, misperceptions by employees and employers, difficulty in close collaboration, low productivity with remote coworkers, insufficient understanding of remote working, and lack of training about smartwork. The third category, technological obstacles, includes security concerns about mobile work, the lack of specialized solutions, and the lack of adoption and operation know-how. To overcome the current problems of smartwork in practice and the obstacles reported in the literature, we suggest a novel smartwork service model based on NFC (Near Field Communication). This paper proposes an NFC-based smartwork service model composed of an NFC-based smartworker networking service and an NFC-based smartwork space management service. The NFC-based smartworker networking service comprises an NFC-based communication/SNS service and an NFC-based recruiting/job-seeking service.
The NFC-based communication/SNS service model addresses key shortcomings of existing smartwork service models. By connecting to a company's existing legacy systems through NFC tags and systems, the low productivity and the difficulties of collaboration and attendance management can be overcome: managers can obtain work processing, work time, and work space information about employees, while employees can communicate with coworkers in real time and obtain their location information. In short, this service model features an affordable system cost, provision of location-based information, and the possibility of knowledge accumulation. The NFC-based recruiting/job-seeking service provides new value by linking an NFC tag service with sharing economy sites. This service model features easy attachment and removal of the service, efficient space-based provision of work, easy search of location-based recruiting/job-seeking information, and system flexibility. It combines the advantages of sharing economy sites with those of NFC: through cooperation with sharing economy sites, the model can provide recruiters with candidates seeking not only long-term but also short-term work, and SMEs can easily find job seekers by attaching NFC tags to any space where qualified candidates may be located. In short, this service model supports efficient distribution of human resources by providing the locations of job seekers and job openings. The NFC-based smartwork space management service can promote smartwork by linking NFC tags attached to the work space with an existing smartwork system. This service features low cost, provision of indoor and outdoor location information, and customized service. In particular, this model can help small companies adopt a smartwork system because it is lightweight and cost-effective compared with existing smartwork systems.
This paper proposes scenarios for the service models, the roles and incentives of the participants, and a comparative analysis. The superiority of the NFC-based smartwork service model is shown by comparing and analyzing the new service models against the existing ones. The service model can expand the range of enterprises and organizations that adopt smartwork and the range of employees who take advantage of it.

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from four basic concepts confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective, which can be used in portfolio decision making. Previous research formulates the CTR estimation model using CPC, Rank, Impressions, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates, along with practical management problems. Sponsors make keyword bidding decisions under limited information, and a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model framework is designed on the basic assumption that Rank is the decision variable. Rank has been proposed in many papers as the best decision variable for predicting CTR, and most search engine platforms provide options and algorithms that make it possible to bid by Rank.
Sponsors can therefore participate in keyword bidding with Rank, and this paper tests the validity of the new SSA model and its applicability to constructing an optimal keyword portfolio. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationships, screens the scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover, together with some strategic recommendations. Tests of optimization models using these CTR/CPC estimation models are performed empirically with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and the CVR optimization test results confirm significant improvements, showing that the suggested SSA model is valid for constructing a keyword portfolio using the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit. To solve this problem, a Markov Chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows. Brand keywords are usually dominant in almost every respect: CTR, CVR, expected profit, and so on.
It is found, however, that generic keywords are the CTKs and have spillover potential that can increase consumers' awareness and lead them to brand keywords; this is why generic keywords should be the focus in keyword bidding. The contributions of the thesis are to propose the novel SSA model with Rank as the decision variable, to propose managing the keyword portfolio by categories according to keyword characteristics, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, and to perform empirical tests, propose new strategic guidelines focusing on the CTK, and propose the modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
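With Rank as the decision variable, portfolio construction can be sketched as choosing one rank per keyword to maximize expected clicks under a spend cap. The CTR/CPC-by-rank table, impression counts, and budget below are invented for illustration; the paper estimates these relationships from data and uses richer objective functions (CTR, CVR, and the revised EOP-based model):

```python
from itertools import product

# Toy per-keyword tables mapping Rank -> (estimated CTR, estimated CPC).
# All numbers are invented for illustration only.
keywords = {
    "brand":   {1: (0.30, 900), 2: (0.22, 700), 3: (0.15, 500)},
    "generic": {1: (0.12, 600), 2: (0.09, 450), 3: (0.06, 300)},
}
impressions = {"brand": 1000, "generic": 5000}
budget = 600_000  # total spend cap, in arbitrary currency units

# Brute-force search over rank assignments (fine for a toy portfolio;
# a real portfolio would use a proper optimization formulation).
best_value, best_plan = -1.0, None
for ranks in product(*(table.keys() for table in keywords.values())):
    plan = dict(zip(keywords, ranks))
    clicks = sum(impressions[k] * keywords[k][r][0] for k, r in plan.items())
    cost = sum(impressions[k] * keywords[k][r][0] * keywords[k][r][1]
               for k, r in plan.items())
    if cost <= budget and clicks > best_value:
        best_value, best_plan = clicks, plan
```

Note how the budget constraint pushes the brand keyword down to rank 2 so the higher-volume generic keyword can hold rank 1, which echoes the paper's point that generic keywords deserve attention despite brand keywords' dominance per click.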

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups from 2G to 5G to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications, such as traffic control, the reduction of delay and the reliability of real-time services are very important in addition to high data speed. 5G communication uses the high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straightness, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Under existing networks it is therefore difficult to overcome these constraints. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control plane signals from data plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs thus need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and it should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough, contributing less than 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; delay plays a very sensitive role in such cases. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request the relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assume 5G small cells with radii of 50 ~ 250 m and consider vehicle speeds of 30 ~ 200 km/h in order to examine the network architecture that minimizes the delay.
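The interplay of cell size and vehicle speed can be illustrated with a back-of-the-envelope dwell-time calculation: how long a car remains inside one small cell, which bounds how often the split SDN must refresh that vehicle's information. The parameter ranges follow the abstract (50~250 m radius, 30~200 km/h); the assumption that the car crosses the full cell diameter in a straight line is ours:

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time in seconds a vehicle spends inside a cell, assuming it
    drives straight through the full diameter (worst-case chord)."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return 2.0 * cell_radius_m / speed_ms

# Fastest vehicle in the smallest cell gives the shortest dwell time,
# i.e. the tightest bound on the SDN information update cycle.
shortest = dwell_time_s(50.0, 200.0)   # 100 m at ~55.6 m/s
longest = dwell_time_s(250.0, 30.0)    # 500 m at ~8.3 m/s
```

At 200 km/h in a 50 m cell, handovers occur roughly every couple of seconds, so an SDN whose information change cycle or processing time approaches that scale cannot serve emergency V2X traffic in time; this is the motivation for sizing and splitting the SDN.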

Eurasian Naval Power on Display: Sino-Russian Naval Exercises under Presidents Xi and Putin (유라시아 지역의 해군 전력 과시: 시진핑 주석과 푸틴 대통령 체제 하에 펼쳐지는 중러 해상합동훈련)

  • Richard Weitz
    • Maritime Security
    • /
    • v.5 no.1
    • /
    • pp.1-53
    • /
    • 2022
  • One manifestation of the contemporary era of renewed great power competition has been the deepening relationship between China and Russia. Their strengthening military ties, notwithstanding their lack of a formal defense alliance, have been especially striking. Since China and Russia deploy two of the world's most powerful navies, their growing maritime cooperation has been one of the most significant international security developments of recent years. The Sino-Russian naval exercises, involving varying platforms and locations, have built on years of high-level personnel exchanges, large Russian weapons sales to China, the Sino-Russia Treaty of Friendship, and other forms of cooperation. Though the joint Sino-Russian naval drills began soon after Beijing and Moscow ended their Cold War confrontation, these exercises have become much more important during the last decade, essentially becoming a core pillar of their expanding defense partnership. China and Russia now conduct more naval exercises in more places and with more types of weapons systems than ever before. In the future, Chinese and Russian maritime drills will likely encompass new locations, capabilities, and partners (possibly including the Arctic, hypersonic delivery systems, and new African, Asian, and Middle Eastern partners), as well as continue such recent innovations as conducting joint naval patrols and combined arms maritime drills. China and Russia pursue several objectives through their bilateral naval cooperation. The Treaty of Good-Neighborliness and Friendly Cooperation Between the People's Republic of China and the Russian Federation lacks a mutual defense clause, but does provide for consultations about common threats. 
The naval exercises, which rehearse non-traditional along with traditional missions (e.g., counter-piracy and humanitarian relief as well as high-end warfighting), provide a means to enhance their response to such mutual challenges through coordinated military activities. Though the exercises may not realize substantial interoperability gains regarding combat capabilities, the drills do highlight to foreign audiences the Sino-Russian capacity to project coordinated naval power globally. This messaging is important given the reliance of China and Russia on the world's oceans for trade and the two countries' maritime territorial disputes with other countries. The exercises can also improve their national military capabilities as well as help them learn more about each other's tactics, techniques, and procedures. The rising Chinese Navy especially benefits from working with the Russian armed forces, which have more experience conducting maritime missions, particularly in combat operations involving multiple combat arms, than the People's Liberation Army (PLA). On the negative side, these exercises, by enhancing their combat capabilities, may make Chinese and Russian policymakers more willing to employ military force or run escalatory risks in confrontations with other states. All these impacts are amplified in Northeast Asia, where the Chinese and Russian navies conduct most of their joint exercises. Northeast Asia has become an area of intensifying maritime confrontations involving China and Russia against the United States and Japan, with South Korea situated uneasily between them. The growing ties between the Chinese and Russian navies have complicated South Korean-U.S. military planning, diverted resources from concentrating against North Korea, and worsened the regional security environment. Naval planners in the United States, South Korea, and Japan will increasingly need to consider scenarios involving both the Chinese and Russian navies. 
For example, South Korean and U.S. policymakers need to prepare for situations in which coordinated Chinese and Russian military aggression overtaxes the Pentagon, obligating the South Korean Navy to rapidly backfill for any U.S.-allied security gaps that arise on the Korean Peninsula. Potentially reinforcing Chinese and Russian naval support to North Korea in a maritime confrontation with South Korea and its allies would present another serious challenge. Building on the commitment of Japan and South Korea to strengthen security ties, future exercises involving Japan, South Korea, and the United States should expand to consider these potential contingencies.

  • PDF

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the risk of personal information leakage is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing context-aware personalized service success. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have only focused on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. 
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitation of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Consequently, conducting a survey on the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they only consider location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully follow the procedure for the Delphi study proposed by Okoli and Pawlowski. 
This involves three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study. We performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates. The sub-factors were found through the literature survey. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. 
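The round-3 aggregation described above (mean ranks per item, with a concordance check across experts) is commonly computed with Kendall's coefficient of concordance W. A minimal sketch under that assumption; the function and the example rankings are illustrative placeholders, not data from the study:

```python
# Sketch of the final Delphi aggregation step: mean ranks per factor
# plus Kendall's W as the concordance measure (W = 1 means perfect
# agreement among experts, W = 0 means no agreement).

def kendalls_w(rankings):
    """rankings: one rank list per expert; rank 1 = most important."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)  # sum of squared deviations
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings of four factors by three experts.
experts = [
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]
mean_ranks = [sum(r[i] for r in experts) / len(experts) for i in range(4)]
w = kendalls_w(experts)
```

A rising W over successive rounds signals that the panel is converging, which is the stopping criterion a concordance analysis typically supports.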
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the perspective of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to the extent to which users have privacy concerns. The traditional questionnaire method was not chosen because users, at that time, lacked sufficient understanding of and experience with the new technology behind context-aware personalized services. Regarding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important factors among the technological characteristics of context-aware personalized services. 
In the creation of a context-aware personalized service, this study demonstrates the importance and relevance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, on the supposition that context-aware technology will continue to develop. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies are needed on approaches to reducing users' privacy concerns about technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output. The results show that delivery and display, which present services to users in context-aware personalized services oriented toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.