• Title/Summary/Keyword: u-value


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs brought about by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article status, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect on collaboration efficiency is more pronounced for the more academic tasks in an online community.
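As a concrete illustration of the two focal measures, the sketch below computes the Pareto ratio and the Gini coefficient from a vector of per-editor edit counts; the counts, the 20% cutoff handling, and the function names are hypothetical illustrations, not the study's code.

```python
import numpy as np

def pareto_ratio(contributions, top_share=0.20):
    """Share of total contributions made by the top `top_share` of participants."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(top_share * len(c))))               # size of the upper 20%
    return c[:k].sum() / c.sum()

def gini(contributions):
    """Gini coefficient of the contribution distribution (0 = equal, 1 = maximal inequality)."""
    c = np.sort(np.asarray(contributions, dtype=float))        # ascending
    n = len(c)
    ranks = np.arange(1, n + 1)
    return (2 * (ranks * c).sum()) / (n * c.sum()) - (n + 1) / n

# Hypothetical edit counts for one featured article's editors
edits = [120, 45, 30, 12, 8, 5, 3, 2, 2, 1]
print(pareto_ratio(edits))  # ~0.72: the upper 20% of editors made ~72% of the edits
print(gini(edits))          # inequality of the edit distribution
```

In the study's design, each article's Pareto ratio and Gini coefficient would then enter the Cox regression as focal covariates alongside the seven controls.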

Postoperative Chemoradiotherapy in Locally Advanced Rectal Cancer (국소 진행된 직장암에서 수술 후 화학방사선요법)

  • Chai, Gyu-Young;Kang, Ki-Mun;Choi, Sang-Gyeong
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.221-227
    • /
    • 2002
  • Purpose: To evaluate the role of postoperative chemoradiotherapy in locally advanced rectal cancer, we retrospectively analyzed the treatment results of patients treated with curative surgical resection and postoperative chemoradiotherapy. Materials and Methods: From April 1989 through December 1998, 119 patients were treated with curative surgery and postoperative chemoradiotherapy for rectal carcinoma at Gyeongsang National University Hospital. Patient age ranged from 32 to 73 years, with a median of 56 years. Low anterior resection was performed in 59 patients, and abdominoperineal resection in 60. Forty-three patients were AJCC stage II and 76 were stage III. Radiation was delivered with 6 MV X-rays using either AP-PA two fields, AP-PA plus both lateral four fields, or PA plus both lateral three fields. The total radiation dose ranged from 40 Gy to 56 Gy. In 73 patients, bolus infusions of 5-FU (400 mg/m²) were given during the first and fourth weeks of radiotherapy. After completion of radiotherapy, an additional four to six cycles of 5-FU were given. Oral 5-FU (Furtulone) was given for nine months in 46 patients. Results: Forty (33.7%) of the 119 patients showed treatment failure. Local failure occurred in 16 (13.5%) patients: 1 (2.3%) of 43 stage II patients and 15 (19.7%) of 76 stage III patients. Distant failure occurred in 31 (26.1%) patients, among whom 5 (11.6%) were stage II and 26 (34.2%) were stage III. Five-year actuarial survival was 56.2% overall, 71.1% in stage II patients and 49.1% in stage III patients (p=0.0008). Five-year disease-free survival was 53.3% overall, 68.1% in stage II and 45.8% in stage III (p=0.0006). Multivariate analysis showed that T stage and N stage were significant prognostic factors for five-year survival, and that T stage, N stage, and preoperative CEA value were significant prognostic factors for five-year disease-free survival. Bowel complications occurred in 22 patients and were treated surgically in 15 (12.6%) and conservatively in 7 (5.9%). Conclusion: Postoperative chemoradiotherapy was confirmed to be an effective modality for local control of rectal cancer, but the distant failure rate remained high. More effective modalities should be investigated to lower the distant failure rate.
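As an illustration of how five-year actuarial survival by stage is typically estimated, here is a minimal Kaplan-Meier sketch using the lifelines library (assumed available); the per-patient follow-up table is hypothetical, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical follow-up data: time to death or censoring, in months
df = pd.DataFrame({
    "months": [14, 60, 38, 72, 25, 60, 49, 60],   # follow-up time
    "died":   [1, 0, 1, 0, 1, 0, 1, 0],           # 1 = death observed, 0 = censored
    "stage":  ["III", "II", "III", "II", "III", "II", "III", "II"],
})

for stage, grp in df.groupby("stage"):
    km = KaplanMeierFitter()
    km.fit(grp["months"], event_observed=grp["died"], label=f"stage {stage}")
    # Estimated survival probability at 60 months = five-year actuarial survival
    print(stage, km.predict(60))
```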

A Survey of Nutritional Status on Pre-School Children in Korea (학영기전아동(學齡期前兒童)의 영양실태조사(營養實態調査))

  • Ju, Jin-Soon;Oh, Seoung-Ho
    • Journal of Nutrition and Health
    • /
    • v.9 no.2
    • /
    • pp.68-86
    • /
    • 1976
  • The primary purpose of this study was to accurately evaluate the nutritional status of pre-school children in Korea, and further to identify and define nutritional problems and to assist in establishing a plan for their nutritional improvement. To this end, surveys of food intake and health condition (physical, clinical, biochemical, and parasitological) were conducted on 109 pre-school children of both sexes, randomly selected from the Yang-gu area in Gang-Won Province and the Rea-ju area in Kyong-gy Province, by means of three-day records during two periods, the Spring and Fall seasons of 1975. The results obtained are summarized as follows: 1. Food intake: Average daily food intake of the subjects was 508–647 g (83–91% vegetable foods and 5.5–11.7% animal foods) in the Yang-gu area, and 587–698 g (88–89% vegetable foods and 6.3–7.6% animal foods) in the Rea-ju area. 2. Intake of energy and nutrients: a) Calorie intake. Average daily energy intake in the Yang-gu area (1,120–1,415 kcal) was lower than the Korean Recommended Dietary Allowances (RDA) in both the Spring and Fall surveys, whereas subjects in the Rea-ju area had lower intakes (1,213–1,418 kcal) than the RDA in the Spring but higher intakes (1,516–1,755 kcal) than the RDA in the Fall, with the average at a level similar to the RDA. b) Protein intake. Average daily protein intake in the Yang-gu area (33–43 g) and of girls in the Rea-ju area (35–39 g) was lower than the RDA in both the Spring and Fall surveys, whereas boys in the Rea-ju area had lower intake (36–38 g) than the RDA in the Spring and higher intake (49–57 g) in the Fall, with the average (43–47 g) at a level similar to the RDA. Protein intake from animal sources was much lower in all subjects (5.5–11.7% of total protein) than the RDA. c) Fat intake. Average fat intake was very low in all subjects of both areas (14–24 g in Yang-gu, 10–12 g in Rea-ju) relative to the RDA, which recommends that 12–14% of total energy be supplied from fat. d) Calcium intake. Average calcium intake was very low in all subjects of both areas (264–355 mg in Yang-gu and 283–429 mg in Rea-ju); in particular, Spring intakes were about half the RDA. Intake increased considerably in the Fall owing to increased milk consumption, but was still below the RDA. e) Vitamin A intake. Average vitamin A intake (703–1,465 IU in Yang-gu and 750–1,521 IU in Rea-ju) was also lower than the RDA; moreover, the vitamin A sources were mainly vegetable, so the vitamin A supply might be a critical concern for the subjects. f) Riboflavin intake. Average riboflavin intake of all subjects in both areas, except boys in the Rea-ju area in the Fall, was far below the RDA. 3. Physical status: a) Average weight and height of boys aged 4 and 5 in the Yang-gu area and of girls aged 5 in the Rea-ju area were lower than those of the 1967 Korean Standard report, but those by age of girls in the Yang-gu area and boys in the Rea-ju area were a little higher than the Korean Standard. The present Korean standard of physical status, however, is probably somewhat higher than that of 1967, since socio-economic conditions have improved greatly over the past decade; considered in this light, the physical status of the subjects in this survey may be somewhat lower than the present Korean standard.
b) Average upper arm circumference showed no difference between the two areas; the mean values at ages 4, 5, and 6 were 15.6, 16.5, and 16.4 cm for boys and 15.5, 16.5, and 16.4 cm for girls, respectively. c) Average chest girth of boys was similar to the Korean standard, whereas that of girls was smaller than the Korean standard. Average head circumference showed a similar tendency to chest girth. 4. Clinical findings: The most common clinical signs were angular stomatitis and dental caries, with a higher incidence in boys than in girls. 5. Biochemical findings: a) Hemoglobin and anemia. Average Hb values of boys and girls were 11.4 and 10.9 g per 100 ml of blood, respectively. The incidence of anemia (Hb below 11 g/100 ml, by WHO criteria) increased with age and was higher in girls than in boys (boys 34%, girls 48%). The incidence of anemia at ages 4, 5, and 6 was 28%, 41%, and 34% in boys and 33%, 50%, and 49% in girls, respectively. The anemia was not severe and was probably caused mainly by low intake of good-quality protein together with low iron intake. b) Hematocrit. Average Ht values of all subjects were 39.9–41.6%. c) Blood plasma protein. Average blood plasma protein of all subjects was 6.6–7.4 g per 100 ml. The only case in the deficient range (<6.0 g%, by ICNND) was one girl aged 4 in the Yang-gu area. 6. Parasitological findings: The most prevalent parasites were Ascaris lumbricoides and Trichocephalus trichiura, and about two-thirds of all subjects suffered from one or more of these parasitic infections.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the smallest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
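As a rough illustration of the two computations at the heart of this procedure, the sketch below builds a gravity-model trip table from zonal productions, attractions, and friction factors, then applies a SELINK-style link adjustment factor (ground count divided by assigned volume) to the zones feeding one selected link; the three-zone setup and all numbers are hypothetical, not the study's data.

```python
import numpy as np

P = np.array([500.0, 300.0, 200.0])          # zonal truck trip productions
A = np.array([400.0, 400.0, 200.0])          # zonal truck trip attractions
F = np.array([[1.0, 0.6, 0.3],               # friction factors f(d_ij) for one trip type
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Gravity model: T_ij = P_i * (A_j * F_ij) / sum_k (A_k * F_ik)
W = A * F                                    # row-wise weights A_j * F_ij
T = P[:, None] * W / W.sum(axis=1, keepdims=True)

# SELINK adjustment: scale the zones whose trips use a selected link by
# the ratio of the link's ground count to its total assigned volume
ground_count, assigned_volume = 1150.0, 1000.0
adj = ground_count / assigned_volume         # link adjustment factor, here 1.15
T[0, :] *= adj                               # e.g. trips from zone 0 traverse the selected link
print(T)
```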
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of those factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by the sensors usually contain privacy information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have focused on only a small subset of the technical characteristics of context-aware computing; therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Nevertheless, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider the overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski. This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics that had a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, the level of importance was correlated with professionals' opinions on the extent to which users have privacy concerns.
The traditional questionnaire method was not selected because of users' absolute lack of understanding of and experience with this new technology in context-aware personalized services. Regarding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, including which technologies are needed, and in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, presupposing the continued development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. To build on the sub-factor evaluation results, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
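The abstract does not name the statistic used for the concordance analysis; Kendall's coefficient of concordance (W) is the conventional choice for measuring consensus across Delphi rounds, so the sketch below computes it for a hypothetical matrix of expert rankings (the data and function are illustrative, not the paper's).

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for m experts (rows) ranking n factors (columns), values 1..n.

    0 = no agreement among experts, 1 = perfect agreement.
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    S = ((rank_sums - rank_sums.mean()) ** 2).sum()   # squared deviation of rank sums
    return 12 * S / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings of five privacy-concern factors by three experts
experts = [[1, 2, 3, 4, 5],
           [1, 3, 2, 4, 5],
           [2, 1, 3, 5, 4]]
print(kendalls_w(experts))   # ~0.84 here: values near 1 indicate stable consensus
```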

Indonesia, Malaysia Airline's aircraft accidents and the Indonesian, Korean, Chinese Aviation Law and the 1999 Montreal Convention

  • Kim, Doo-Hwan
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.37-81
    • /
    • 2015
  • AirAsia Flight QZ8501 departed from Juanda International Airport in Surabaya, Indonesia at 05:35 on Dec. 28, 2014 and was scheduled to arrive at Changi International Airport in Singapore at 08:30 the same day. The aircraft, an Airbus A320-200 carrying 162 passengers and crew, lost contact with ground control and crashed into the Java Sea off the coast of Surabaya, Indonesia's second largest city, on its way to Singapore; its debris was found about 66 miles from the plane's last detected position. The flight vanished from radar 42 minutes after departing Surabaya early on Dec. 28, 2014, with 155 passengers and seven crew members aboard: 137 adult passengers, 17 children, and one infant, along with two pilots and five crew members, a majority of them Indonesian nationals. On board Flight QZ8501 were 155 Indonesians, three South Koreans, and one person each from Singapore, Malaysia, and the UK. Malaysia Airlines Flight 370 departed from Kuala Lumpur International Airport on March 8, 2014 at 00:41 local time and was scheduled to land at Beijing's Capital International Airport at 06:30 local time. The flight, also marketed as China Southern Airlines Flight 748 (CZ748) through a code-share agreement, was a scheduled international passenger flight that disappeared on March 8, 2014 en route from Kuala Lumpur International Airport to Beijing's Capital International Airport (a distance of 2,743 miles; 4,414 km). The aircraft, a Boeing 777-200ER, last made contact with air traffic control less than an hour after takeoff. Operated by Malaysia Airlines (MAS), the aircraft carried 12 crew members and 227 passengers from 15 nations, including 153 Chinese and 38 Malaysians, according to records; nearly two-thirds of the passengers on Flight 370 were from China. On April 5, 2014, what could be the wreckage of the ill-fated Malaysia Airlines flight was found: what appeared to be the remnants of Flight MH370 were spotted drifting in a remote section of the Indian Ocean. Compensation for loss of life differs vastly between U.S. passengers and non-U.S. passengers: "If the claim is brought in the U.S. court, it's of significantly more value than if it's brought into any other court." Some victims' families and survivors of the Indonesian and Malaysian airlines' crash cases would therefore like to file suit in a United States court, rather than in an Indonesian or Malaysian court, in order to receive a larger compensation package for damage caused by accidents that occurred in the Java Sea and the Indian Ocean. Each victim of the two crash cases is entitled to an unconditional 113,100 Units of Account (SDR) as compensation for damage from Indonesia AirAsia and Malaysia Airlines in accordance with Article 21(1) (an absolute, strict, no-fault liability system) of the 1999 Montreal Convention.
However, if Indonesia AirAsia and Malaysia Airlines cannot prove either of the following two points, based on Article 21(2) (a presumed-fault system) of the 1999 Montreal Convention, they will bear unlimited liability to each victim and survivor of the two crash cases: (1) that such damage was not due to the negligence or other wrongful act or omission of the air carrier or its servants or agents, or (2) that such damage was solely due to the negligence or other wrongful act or omission of a third party. In this researcher's view, for the aforementioned reasons and under Chinese, Indonesian, Malaysian, and Korean law, some victims and survivors of the crashes of the two flights are entitled to receive possibly from more than 113,100 SDR up to 5 million US$ from the two airlines or from their aviation insurance companies, based on the decisions of American courts. It could also be argued that it is reasonable and necessary to revise the clause referring to bodily injury in Article 17 of the 1999 Montreal Convention into a clause mentioning personal injury, so as to include mental injury and consolation money in the near future.
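As a rough sketch of the two-tier liability logic described above (an illustration of the rule as the abstract states it, not the author's method and not legal advice), the claim amounts, exchange handling, and defence flags below are hypothetical inputs.

```python
# Article 21(1): compensation up to 113,100 SDR is owed on a strict,
# no-fault basis; Article 21(2): above that cap, liability is unlimited
# unless the carrier proves one of the two defences.
STRICT_LIABILITY_CAP_SDR = 113_100

def carrier_liability_sdr(claim_sdr, carrier_proved_no_fault, third_party_solely_at_fault):
    """Return the carrier's exposure for one passenger claim, in SDR."""
    first_tier = min(claim_sdr, STRICT_LIABILITY_CAP_SDR)   # owed regardless of fault
    if carrier_proved_no_fault or third_party_solely_at_fault:
        return first_tier                                   # Article 21(2) defence succeeds
    return claim_sdr                                        # otherwise liability is unlimited

print(carrier_liability_sdr(500_000, False, False))  # 500000: defence not established
print(carrier_liability_sdr(500_000, True, False))   # 113100: capped at the first tier
```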

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model that assumes car model choice at the first stage and the choice between a new and a used car at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share, maintaining the status quo; the new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model. Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
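To make the nesting structure concrete, here is a minimal sketch of BNU-style nested logit choice probabilities (brand choice at the first stage, new versus used within each brand nest); the utilities and the dissimilarity parameter lam are hypothetical illustrations, not the paper's estimates.

```python
import numpy as np

def nested_logit_probs(V, lam):
    """V: dict brand -> {'new': utility, 'used': utility}; lam: dissimilarity in (0, 1]."""
    brands = list(V)
    # Inclusive value of each brand nest: IV_b = log(sum_j exp(V_bj / lam))
    IV = {b: np.log(sum(np.exp(v / lam) for v in V[b].values())) for b in brands}
    denom = sum(np.exp(lam * IV[b]) for b in brands)
    probs = {}
    for b in brands:
        p_brand = np.exp(lam * IV[b]) / denom                            # first stage
        within = {c: np.exp(v / lam - IV[b]) for c, v in V[b].items()}   # second stage
        probs[b] = {c: p_brand * within[c] for c in within}
    return probs

V = {"Jetta":   {"new": 1.0, "used": 0.6},
     "Elantra": {"new": 0.8, "used": 0.7}}
print(nested_logit_probs(V, lam=0.5))
```

With lam = 1 the nests dissolve and the model reduces to the IIA multinomial logit benchmark; an estimated dissimilarity parameter above 1, as the paper reports for the NUB specification, signals mis-specification.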
