• Title/Summary/Keyword: U-value


Studies on the Rice Yield Decreased by Ground Water Irrigation and Its Preventive Methods (지하수 관개에 의한 수도의 멸준양상과 그 방지책에 관한 연구)

  • 한욱동
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.16 no.1
    • /
    • pp.3225-3262
    • /
    • 1974
  • The purposes of this thesis are to clarify experimentally the variation of ground water temperature in tube wells during the irrigation period of paddy rice; the effect of ground water irrigation on the growth, grain yield, and yield components of the rice plant; when and why the plant is most liable to be damaged by ground water; and to find effective ground water irrigation methods. The results obtained in this experiment are as follows: 1. The temperature of ground water in tube wells varies with location, year, and the depth of the well. The average temperatures of ground water in tube wells 6.3 m and 8.0 m deep are 14.5°C and 13.1°C, respectively, during the irrigation period of paddy rice (from the middle of June to the end of September). In the former the temperature rises continuously from 12.3°C to 16.4°C, and in the latter from 12.4°C to 13.8°C, during the same period. These temperatures are approximately the same as the estimated temperatures. The temperature difference between the ground water and the surface water is approximately 11°C. 2. Analysis of the water quality of the "Seoho" reservoir and of water from the tube well shows that the pH values of the ground water and the surface water are 6.35 and 6.00, respectively; inorganic components such as N, PO4, Na, Cl, SiO2, and Ca are more abundant in the ground water than in the surface water, while K, SO4, Fe, and Mg are less abundant in the ground water. 3.
The responses of growth, yield, and yield components of paddy rice to ground water irrigation are as follows: (1) Using ground water irrigation during the watered rice nursery period (seeding date: 30 April 1970), the characteristics of young rice plants, such as plant height, number of leaves, and number of tillers, are inferior to those of young rice plants irrigated with surface water during the same period. (2) Where ground water and surface water are supplied separately by the gravity flow method, ground water irrigation delays by 6 days the stage at which the maximum increase in the number of tillers occurs. (3) At the tillering stage just after transplanting, ground water irrigation increases the number of tillers more than supplying surface water throughout the whole irrigation period. Conversely, the number of tillers is decreased by ground water irrigation at the reproductive stage. Plant height is strongly restrained by ground water irrigation. (4) Heading date is clearly delayed by ground water irrigation, whether it is practised during all growth stages or at the reproductive stage only. (5) The heading date is slightly delayed by irrigation with the gravity flow method as compared with the standing water method. (6) The responses of yield and yield components to ground water irrigation are as follows: ① When ground water irrigation is practised during all growth stages and during the reproductive stage, the culm length of the rice plant is reduced by 11 percent and 8 percent, respectively, compared with surface water irrigation throughout all growth stages. ② Panicle length is longest on the test plot in which ground water irrigation is practised at the tillering stage; a tendency similar to that seen in culm length is observed on the other test plots.
③ The number of panicles is least on the plot in which ground water irrigation is practised by the gravity flow method throughout all growth stages; no significant difference is found between the other plots. ④ The numbers of spikelets per panicle for the various stages at which surface or ground water is supplied by the gravity flow method are as follows: surface water at all growth stages, 98.5; ground water at all growth stages, 62.2; ground water at the tillering stage, 82.6; ground water at the reproductive stage, 74.1. ⑤ The ripening percentage is about 70 percent on the test plots in which ground water irrigation is practised during all growth stages or at the tillering stage only. However, when ground water irrigation is practised at the reproductive stage, the ripening percentage is reduced to 50 percent, i.e., a 20 percent reduction caused by ground water irrigation at the reproductive stage. ⑥ The weight of 1,000 kernels shows a tendency similar to the ripening percentage: ground water irrigation during all growth stages or at the reproductive stage results in a decreased 1,000-kernel weight. ⑦ The yields of brown rice from the various treatments are as follows. Gravity flow: surface water at all growth stages, 514 kg/10a; ground water at all growth stages, 428 kg/10a; ground water at the reproductive stage, 430 kg/10a. Standing water: surface water at all growth stages, 556 kg/10a; ground water at all growth stages, 441 kg/10a; ground water at the reproductive stage, 450 kg/10a. These figures show that ground water irrigation by the gravity flow and standing water methods during all growth stages resulted in an 18 percent and a 21 percent decrease in the yield of brown rice, respectively, when compared with surface water irrigation.
Ground water irrigation at the reproductive stage by gravity flow and by standing water likewise resulted in decreases in yield of 16 percent and 19 percent, respectively, compared with surface water irrigation. 4. The results obtained from the experiments on improving the efficiency of ground water irrigation for paddy rice are as follows: (1) When standing water irrigation with surface water is practised, the daily average water temperature in the paddy field is 25.2°C, whereas with the gravity flow method and the same irrigation water it is 24.5°C; that is, the former is 0.7°C higher than the latter. When ground water is used, the daily average water temperatures in the paddy field are 21.0°C and 19.3°C under the standing water and gravity flow methods, respectively; the former is approximately 1.7°C higher than the latter. (2) With non-water-logged cultivation, the yield of brown rice is 516.3 kg/10a, while the yields from the plot irrigated with ground water throughout the whole irrigation period and from the surface water irrigation plot are 446.3 kg/10a and 556.4 kg/10a, respectively. Thus there is no significant difference in yield between surface water irrigation and non-water-logged cultivation, and non-water-logged cultivation results in a 12.6 percent increase in yield over the ground water irrigation plot. (3) Black or white coloring of the inside surface of the water warming ponds has no substantial effect on the temperature of the water.
The average daily water temperatures of water warming ponds of different depths are expressed as Y = aX + b, while the daily average water temperatures at various depths within a single water warming pond are expressed as Y = a·b^X (where Y is the daily average water temperature, a and b are constants depending on the type of water warming pond, and X is the water depth). As the depth of the water warming pond increases, the diurnal difference between the highest and lowest water temperatures decreases, and the time at which the highest water temperature occurs is delayed. (4) The degree of warming obtained with a polyethylene tube 100 m long and 10 cm in diameter is 4~9°C, and the heat exchange rate of a polyethylene tube is 1.5 times that of a water warming channel. The following equation expresses the water warming mechanism of a polyethylene tube, given the distance from the tube inlet, the time of day, and several climatic factors:

θw(x, t) = a0(1 − e^(−x/Φv)) + Σ[n=1 to 2] { an / √(1 + (nωΦ)²) } · { sin(nωt + bn + tan⁻¹(nωΦ)) − e^(−x/Φv) · sin(nω(t − x/v) + bn + tan⁻¹(nωΦ)) } + e^(−x/Φv) · θi

θ∞(t) = [α·θa + θw′ + (S − Bs)·Uw] / β,  Φ = cp·D·Uω / (4β)

where θw is the discharged water temperature (°C); θa the air temperature (°C); θw′ the ponded water temperature (°C); S the net solar radiation (ly/min); t the time (radian); x the tube length (cm); D the tube diameter (cm); a0, an, bn constants determined from the variation of θw(t); cp the heat capacity of water (cal/°C·cm³); U, Uw the overall heat transfer coefficients (cal/°C·cm²·min); v the velocity of water in the polyethylene tube (cm/min); and Bs the heat exchange rate between water and soil (ly/min).
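The warming-tube relation above can be evaluated numerically. The following is a minimal Python sketch, not the authors' code: the function name and every example constant (a0, the harmonic coefficients, Φ, v, ω, θi) are illustrative assumptions, not fitted values from the paper.

```python
import math

def tube_outlet_temp(x, t, a0, harmonics, theta_i, phi, v, omega):
    """Discharged water temperature theta_w(x, t) of a polyethylene warming tube.

    x         : distance from the tube inlet (cm)
    t         : time (radians, matching omega)
    a0        : mean equilibrium-temperature component (deg C)
    harmonics : list of (a_n, b_n) Fourier coefficients for n = 1, 2, ...
    theta_i   : inlet (ground) water temperature (deg C)
    phi       : time constant Phi (min)
    v         : flow velocity in the tube (cm/min)
    omega     : angular frequency of the daily cycle (rad/min)
    """
    decay = math.exp(-x / (phi * v))          # e^(-x/(Phi v)) attenuation
    theta = a0 * (1.0 - decay)                # approach to the equilibrium mean
    for n, (a_n, b_n) in enumerate(harmonics, start=1):
        amp = a_n / math.sqrt(1.0 + (n * omega * phi) ** 2)
        lag = math.atan(n * omega * phi)
        theta += amp * (math.sin(n * omega * t + b_n + lag)
                        - decay * math.sin(n * omega * (t - x / v) + b_n + lag))
    return theta + decay * theta_i            # residual influence of inlet water

# At the inlet (x = 0) the expression collapses to the inlet temperature theta_i;
# for a very long tube the decay term vanishes and only the equilibrium
# component a0 plus the attenuated harmonics remain.
```

This reproduces the limiting behavior implied by the equation: the warming effect grows with tube length x as the exponential term decays.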


Effect of Exercise on Antioxidant Enzyme Activities of Skeletal Muscle and Liver in STZ-diabetic Rats (STZ-당뇨쥐에서 운동부하가 골격근 및 간의 항산화효소 활성도에 미치는 영향)

  • Seok, Kwang-Ho;Lee, Suck-Kang
    • Journal of Yeungnam Medical Science
    • /
    • v.17 no.1
    • /
    • pp.21-30
    • /
    • 2000
  • Background: The purpose of the present study was to investigate the effect of exercise on the activities of the antioxidant enzymes superoxide dismutase (SOD), glutathione peroxidase (GPX), and catalase (CAT) in skeletal muscle (gastrocnemius) and liver of streptozotocin (STZ)-induced diabetic rats. The malondialdehyde (MDA) concentration was also measured as an index of lipid peroxidation of the tissues under exercise-induced oxidative stress in diabetic rats. Materials and Methods: Male Sprague-Dawley rats were randomly divided into control and STZ-induced diabetic groups. STZ in citrate buffer solution was injected intraperitoneally twice at a 5-day interval (50 and 70 mg/kg, respectively). On the 28th day after the first STZ injection, the diabetic animals were randomly divided into pre- and post-exercise groups. Exercise was imposed on the post-exercise group by treadmill running to exhaustion at moderate intensity (50-70% of VO2max); the average running time was 2 hours and 19 minutes. Results: The blood glucose concentration was increased (p<0.001) and the plasma insulin concentration decreased (p<0.001) in the diabetic rats. The glycogen concentration in muscle and liver was decreased by exhaustive exercise in the diabetic rats (p<0.001). In skeletal muscle, the activity of GPX was increased (p<0.05), while the activities of SOD and CAT were unchanged, in the diabetic rats compared with the control rats.
The activity of GPX was not changed by exercise, but the activities of SOD (p<0.01) and CAT (p<0.01) were decreased by exercise in the diabetic rats. The MDA concentration was not changed by exercise in the diabetic rats, and the values of the pre- and post-exercise diabetic rats did not differ from those of the control rats. In the liver, the activity of SOD was decreased (p<0.01), while the activities of GPX and CAT were unchanged, in the diabetic rats compared with the control rats. The activities of SOD, GPX, and CAT were not changed by exercise in the diabetic rats, although the activity of SOD tended to decrease slightly. The MDA concentration was increased in the diabetic rats compared with the control rats (p<0.001), but there was no change in MDA concentration with exercise in the diabetic rats. Conclusions: In summary, exhaustive physical exercise did not appear to impose oxidative stress from oxygen free radicals on the skeletal muscle, despite the decrease in SOD and CAT activities in the diabetic rats. In liver tissue, damage from oxidative stress was observed in the diabetic rats, but no additional tissue damage from exhaustive physical exercise was observed.


A study on The U.S.-Korean Trade Friction Prevention and Settlement in the Fields of Information and Telecommunication Industries (한미간(韓美間) 정보통신분야(情報通信分野) 통상마찰예방(通商摩擦豫防)과 해소방안(解消方案)에 관한 연구(硏究))

  • Jung, Jay-Young
    • THE INTERNATIONAL COMMERCE & LAW REVIEW
    • /
    • v.13
    • /
    • pp.869-895
    • /
    • 2000
  • The US supports the Information and Communication (IC) industry as a strategic one so as to wield complete power over the world market. Several other countries are likewise eager to support the IC industry, because it produces high added value and has a significant effect on other industries; Korea is no exception. Korea recently succeeded in the commercialization of CDMA for the first time in the world, after the successful development of TDX, and is therefore highly likely to be checked by the US. Although the IC industry is a specific sector of IT, there is concern that trade friction might arise between the US and Korea from the resulting competition. It is very important to prepare solutions in advance so that Korea can prevent such friction while increasing its share domestically and globally, and so that any conflict that unfortunately arises in the IT area can be resolved at minimum cost. The parties with a strong influence on US trade policy are the think-tank groups and the IT-related interest groups, so it is important to maintain close relationships with them. We drew implications by analyzing the case of Japan, which has experienced trade friction with the US in high-tech industries over a long period. To defuse those conflicts with the US, the Japanese did the following: (1) The Japanese government developed supporting theories and sought international support so that the world would back the Japanese position. (2) Through continual dialogue with US business people, Japanese business people sought solutions for sharing profits between Japan and the US in both domestic and worldwide markets, and focused on lobbying activities to turn US public opinion in Japan's favor.
The specific implementation plan was, first, cultural lobbying aimed at the opinion leaders who shape US public opinion; the Japan Society was formed to deliver high-quality lobbying activities. The second element was economic lobbying: Japan established the Japanese Economic Institute in Washington, which regularly or irregularly provides information about Japan to the US government, research institutions, universities, and others interested in Japan, the main objective being to demonstrate the validity of Japanese policy. Japanese top executives and practical interest groups on international trade also try to justify their position through direct contact with US policy makers. The third element is political lobbying, about which Japan is very careful: it does its best not to give the impression that it is trying to shape US policy making, collects a vast amount of information in order to judge situations correctly, does not tilt toward one political party or another, and instead develops a long-term network of people who understand and support Japanese policy. The following implications were drawn from the experience of Japan. First, the Korean government should develop and execute a long-term plan to improve the image of Korea perceived by the American people. Second, the Korean government should begin public relations activities directed at the US elite group; advertising Korea to this group is essential because it leads public opinion in the USA. Third, the Korean government needs to develop relevant policies to create a favorable climate for such publicity, for example information on whom to lobby and how, a personnel network that can respond immediately to erroneous articles about Korea in the US press, and an up-to-date data bank of Korean support groups inside the USA.
Fourth, the Korean government should create an atmosphere that facilitates publicity directed at the US, for example by providing tax incentives on the expenses of such publicity and rewards to those who contribute significantly to it. Fifth, the Korean government should act as a bridge between Korean and US business people. Sixth, the government should promptly analyze the policy for the IT industry, a strategic area, and distribute the information to Korean industries in a timely manner; since the Korean government is the only institution with formal contact with the US government, it is well placed to provide high-quality information. The following are implications for businesses. First, Korean business organizations should carefully analyze and observe the business policies and managerial conditions of US companies, because all trade frictions arise at the business level. Second, it is very important that the top management of Korean firms contact the opinion leaders of the US. Third, Korean business people sent to the USA need to do their part in PR activities. Fourth, it is very important to reach the American employees of Korean companies: if we cannot convince our American employees, it will be much harder to convince ordinary Americans, so the American employees should be made a support group for the Korean position. Fifth, firms should try to obtain information about US firms' policies in the IT area as early as possible; an intensive effort at early information collection leaves more time to respond. Sixth, firms should research the PR cases of foreign, non-American companies inside the USA, identifying the success factors and the failure factors.
Finally, a firm will obtain more valuable information if it analyzes, and responds to, each medium individually.


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs brought by the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration, extended to reflect work characteristics.
All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions of the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic; academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal.
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks.
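The two focal measures defined above, the Pareto ratio and the Gini coefficient, can be computed directly from per-editor contribution counts. A minimal Python sketch follows; the function names and the sample edit counts are my own illustration, not the paper's data:

```python
def pareto_ratio(contribs):
    """Share of all contributions made by the top 20% of contributors."""
    ranked = sorted(contribs, reverse=True)
    k = max(1, round(0.2 * len(ranked)))      # size of the "vital few"
    return sum(ranked[:k]) / sum(ranked)

def gini(contribs):
    """Gini coefficient of the contribution distribution (0 = perfect equality)."""
    xs = sorted(contribs)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

edits = [120, 30, 18, 9, 7, 6, 5, 3, 1, 1]    # hypothetical per-editor edit counts
# pareto_ratio(edits) -> 0.75: the top 2 of 10 editors made 150 of the 200 edits.
```

An article group where every editor contributes equally has a Gini coefficient of 0; the coefficient approaches 1 as contributions concentrate in a single editor.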

Postoperative Chemoradiotherapy in Locally Advanced Rectal Cancer (국소 진행된 직장암에서 수술 후 화학방사선요법)

  • Chai, Gyu-Young;Kang, Ki-Mun;Choi, Sang-Gyeong
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.221-227
    • /
    • 2002
  • Purpose: To evaluate the role of postoperative chemoradiotherapy in locally advanced rectal cancer, we retrospectively analyzed the treatment results of patients treated by curative surgical resection and postoperative chemoradiotherapy. Materials and Methods: From April 1989 through December 1998, 119 patients were treated with curative surgery and postoperative chemoradiotherapy for rectal carcinoma in Gyeongsang National University Hospital. Patient age ranged from 32 to 73 years, with a median of 56 years. Low anterior resection was performed in 59 patients and abdominoperineal resection in 60. Forty-three patients were AJCC stage II and 76 were stage III. Radiation was delivered with 6 MV X-rays using two fields (AP-PA), four fields (AP-PA plus both laterals), or three fields (PA plus both laterals). The total radiation dose ranged from 40 Gy to 56 Gy. In 73 patients, bolus infusions of 5-FU (400 mg/m²) were given during the first and fourth weeks of radiotherapy; after completion of radiotherapy, an additional four to six cycles of 5-FU were given. Oral 5-FU (Furtulone) was given for nine months in 46 patients. Results: Forty (33.7%) of the 119 patients showed treatment failure. Local failure occurred in 16 (13.5%) patients: 1 (2.3%) of the 43 stage II patients and 15 (19.7%) of the 76 stage III patients. Distant failure occurred in 31 (26.1%) patients, among whom 5 (11.6%) were stage II and 26 (34.2%) stage III. Five-year actuarial survival was 56.2% overall, 71.1% in stage II and 49.1% in stage III (p=0.0008). Five-year disease-free survival was 53.3% overall, 68.1% in stage II and 45.8% in stage III (p=0.0006). Multivariate analysis showed that T stage and N stage were significant prognostic factors for five-year survival, and that T stage, N stage, and preoperative CEA value were significant prognostic factors for five-year disease-free survival.
Bowel complications occurred in 22 patients and were treated surgically in 15 (12.6%) and conservatively in 7 (5.9%). Conclusion: Postoperative chemoradiotherapy was confirmed to be an effective modality for local control of rectal cancer, but the distant failure rate remained high. More effective modalities should be investigated to lower the distant failure rate.
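The five-year actuarial survival figures quoted above are conventionally obtained by the life-table method, which multiplies per-interval survival probabilities with censored patients half-weighted. The following is a minimal sketch under that standard definition, not the authors' analysis; the yearly interval counts are hypothetical, not the study's data:

```python
def actuarial_survival(intervals):
    """Life-table (actuarial) survival estimate.

    intervals: list of (n_at_risk, deaths, withdrawals) tuples, one per
    follow-up interval, in chronological order.
    """
    surv = 1.0
    for n_at_risk, deaths, withdrawn in intervals:
        effective = n_at_risk - withdrawn / 2.0   # censored patients half-weighted
        surv *= 1.0 - deaths / effective          # conditional survival this interval
    return surv

# Hypothetical yearly intervals for a 119-patient cohort (illustration only):
years = [(119, 12, 5), (102, 10, 6), (86, 8, 7), (71, 6, 8), (57, 5, 9)]
```

The product over five yearly intervals gives the five-year cumulative survival probability.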

A Survey of Nutritional Status on Pre-School Children in Korea (학영기전아동(學齡期前兒童)의 영양실태조사(營養實態調査))

  • Ju, Jin-Soon;Oh, Seoung-Ho
    • Journal of Nutrition and Health
    • /
    • v.9 no.2
    • /
    • pp.68-86
    • /
    • 1976
  • The primary purpose of this study is to evaluate the nutritional status of pre-school children in Korea, and further to identify and define nutritional problems and to assist in establishing a plan for nutritional improvement. To this end, a food intake and health condition (physical, clinical, biochemical, and parasitological) survey of 109 pre-school children of both sexes, randomly selected from the Yang-gu area in Gang-won province and the Rea-ju area in Kyong-gy province, was conducted by means of three-day records during two periods, spring and fall, of 1975. The results obtained are summarized as follows. 1. Food intake: the average food intake of the subjects per day was 508~647 g (83~91% vegetable foods and 5.5~11.7% animal foods) in the Yang-gu area, and 587~698 g (88~89% vegetable foods and 6.3~7.6% animal foods) in the Rea-ju area. 2. Intake of energy and nutrients: a) Calorie intake. The average energy intakes per day in the Yang-gu area (1120~1415 kcal) were all lower than the Korean Recommended Dietary Allowances (RDA) in both the spring and fall surveys, whereas the subjects in the Rea-ju area had lower intakes (1213~1418 kcal) than the RDA in the spring but higher intakes (1516~1755 kcal) than the RDA in the fall, their average intake being at a level similar to the RDA. b) Protein intake. The average protein intakes per day of subjects in the Yang-gu area (33~43 g) and of girl subjects in the Rea-ju area (35~39 g) were lower than the RDA in both the spring and fall surveys, whereas the boy subjects in the Rea-ju area had lower intakes (36~38 g) in the spring and higher intakes (49~57 g) in the fall than the RDA, the average (43~47 g) being at a level similar to the RDA. The protein intake from animal sources in all subjects was much lower (5.5~11.7% of total protein) than the RDA. c) Fat intake.
The average fat intake was very low in all subjects of both areas (14~24 g in Yang-gu, 10~12 g in Rea-ju) relative to the RDA, which recommends that 12~14% of total energy be supplied from fat. d) Calcium intake. The average calcium intake was very low in all subjects of both areas (264~355 mg in Yang-gu and 283~429 mg in Rea-ju); in particular, the spring values were about half the RDA. Intake increased much in the fall owing to increased intake of milk, but it was still below the RDA. e) Vitamin A intake. The average intakes of vitamin A (703~1465 IU in Yang-gu and 750~1521 IU in Rea-ju) were also lower than the RDA; moreover, the vitamin A sources were mainly vegetable, so the vitamin A supply might be critical for the subjects. f) Riboflavin intake. The average riboflavin intake of all subjects in both areas, except boys in the Rea-ju area in the fall, was very much lower than the RDA. 3. Physical status: a) The average weight and height of boys aged 4 and 5 in the Yang-gu area and of girls aged 5 in the Rea-ju area were lower than those of the Korean Standard of the 1967 report, whereas those by age of girls in the Yang-gu area and boys in the Rea-ju area were a little higher than the Korean Standard. However, the present Korean standard of physical status might be somewhat higher than that of 1967, since the socio-economic situation has improved greatly during the past decade; considered in this light, the physical status of the subjects in this survey might be somewhat lower than the present Korean standard. b) The average upper-arm circumferences in the two areas did not differ from each other; the mean values at ages 4, 5, and 6 were 15.6, 16.5, and 16.4 cm in boys and 15.5, 16.5, and 16.4 cm in girls, respectively. c) The average chest girth of boys was similar to the Korean standard, whereas that of girls was smaller; the average head circumference showed a tendency similar to the chest girth. 4.
Clinical findings: the most common clinical signs were angular stomatitis and dental caries, and boys had a higher incidence than girls. 5. Biochemical findings: a) Hemoglobin and anemia. The average Hb values of boys and girls were 11.4 and 10.9 g per 100 ml of blood, respectively. The incidence of anemia (Hb below 11 g/100 ml, by WHO criteria) increased with age, and girls had a higher incidence than boys (34% vs. 48%). The incidences of anemia at ages 4, 5, and 6 were 28%, 41%, and 34% in boys and 33%, 50%, and 49% in girls, respectively. The degree of anemia was not severe, and the anemia of these subjects may be caused mainly by low intake of better-quality protein as well as low iron intake. b) Hematocrit. The average Ht values of the whole group were 39.9~41.6%. c) Blood plasma protein. The average blood plasma protein content of the whole group was 6.6~7.4 g per 100 ml; the only subject in the deficient range (<6.0 g%, by ICNND criteria) was one girl aged 4 in the Yang-gu area. 6. Parasitological findings: the most common parasites were Ascaris lumbricoides and Trichocephalus trichiura, and about two-thirds of all subjects were suffering from one or more of these parasitisms.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a r.easonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessment and recommendations in the future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data avaiiable from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to'||'&'||'not;zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (00 TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). ~oth "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. 
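In sketch form, the SELINK adjustment is a single ratio (ground count over assigned volume) propagated to the productions and attractions of the zones whose trips use the selected link. The zone names and volumes below are hypothetical:

```python
def selink_adjust(ground_count, assigned_volume, zone_totals, zones_on_link):
    """Scale the productions/attractions of every zone whose assigned trips
    traverse the selected link by the link adjustment factor
    (ground count / total assigned volume); other zones are left unchanged."""
    factor = ground_count / assigned_volume
    return {zone: total * factor if zone in zones_on_link else total
            for zone, total in zone_totals.items()}

# hypothetical example: the link carries 1,000 assigned trips vs. 1,200 counted
adjusted = selink_adjust(1200.0, 1000.0,
                         {"A": 500.0, "B": 300.0, "C": 200.0},
                         zones_on_link={"A", "B"})
# zones A and B are scaled up by a factor of 1.2; zone C is unchanged
```

In the study this step is repeated (with reassignment) for each of the 16 or 32 selected links, which is why the process can substantially reshape the in-state zonal totals.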
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
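The %RMSE statistic used throughout these comparisons is the root-mean-square error of assigned volumes against ground counts, expressed as a percentage of the average ground count, which is why low-volume screenlines and areas (e.g., the North area) show the largest percentages. A sketch with hypothetical volumes:

```python
import math

def percent_rmse(assigned, counts):
    """%RMSE of assigned link volumes against ground counts, expressed
    as a percentage of the average ground count."""
    n = len(counts)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)

# the same absolute error yields a larger %RMSE where counts are small
low = percent_rmse([600, 400], [500, 500])        # 20.0
high = percent_rmse([3100, 2900], [3000, 3000])   # ~3.33
```

Both hypothetical cases have an RMSE of 100 trucks; only the normalizing average count differs.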
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information exposure is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as concern over the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing context-aware personalized service success. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have focused on only a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. 
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Hence, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore highly needed. The purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The formatting of the Delphi rounds faithfully follows the procedure for a Delphi study proposed by Okoli and Pawlowski. 
This involves three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting the process from Okoli and Pawlowski, we outlined the administration of the study. We performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates. The sub-factors were found through the literature survey. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important of the main factors and sub-factors, respectively. 
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. However, our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics that had a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, the level of importance was correlated with professionals' opinions as to the extent of users' privacy concerns. The traditional questionnaire method was not selected because users were considered to have an absolute lack of understanding of and experience with a new technology such as context-aware personalized services. For understanding users' privacy concerns, professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. 
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, on the supposition that context-aware technology will continue to develop. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services oriented toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
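A common implementation of the concordance analysis used to check expert consensus across Delphi rounds is Kendall's coefficient of concordance W over the experts' rank-order lists. A minimal sketch, assuming untied ranks (this is the standard formula, not necessarily the exact statistic computed in the study):

```python
def kendalls_w(rankings):
    """Kendall's W for m experts each ranking the same n items.
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
    of the per-item rank sums from their mean; W near 1 means consensus."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# three hypothetical experts in perfect agreement over three factors
w = kendalls_w([[1, 2, 3], [1, 2, 3], [1, 2, 3]])  # 1.0
```

Rounds would continue (or stop) depending on whether W rises toward an acceptable consensus threshold.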

Indonesia, Malaysia Airline's aircraft accidents and the Indonesian, Korean, Chinese Aviation Law and the 1999 Montreal Convention

  • Kim, Doo-Hwan
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.37-81
    • /
    • 2015
  • AirAsia Flight QZ8501 departed from Juanda International Airport in Surabaya, Indonesia at 05:35 on Dec. 28, 2014 and was scheduled to arrive at Changi International Airport in Singapore at 08:30 the same day. The aircraft, an Airbus A320-200, crashed into the Java Sea on Dec. 28, 2014, carrying 162 passengers and crew, off the coast of Indonesia's second largest city, Surabaya, on its way to Singapore. The jet lost contact with ground control on Dec. 28, 2014, and its debris was found about 66 miles from the plane's last detected position. The 155 passengers and seven crew members were aboard Flight QZ8501, which vanished from radar 42 minutes after departing Surabaya bound for Singapore early on Dec. 28, 2014. AirAsia QZ8501 had on board 137 adult passengers, 17 children and one infant, along with two pilots and five crew members, a majority of them Indonesian nationals. On board Flight QZ8501 were 155 Indonesians, three South Koreans, and one person each from Singapore, Malaysia and the UK. Malaysia Airlines Flight 370 departed from Kuala Lumpur International Airport on March 8, 2014 at 00:41 local time and was scheduled to land at Beijing's Capital International Airport at 06:30 local time. Flight 370, also marketed as China Southern Airlines Flight 748 (CZ748) through a code-share agreement, was a scheduled international passenger flight that disappeared on 8 March 2014 en route from Kuala Lumpur International Airport to Beijing's Capital International Airport (a distance of 2,743 miles: 4,414 km). The aircraft, a Boeing 777-200ER, last made contact with air traffic control less than an hour after takeoff. Operated by Malaysia Airlines (MAS), the aircraft carried 12 crew members and 227 passengers from 15 nations. There were 227 passengers, including 153 Chinese and 38 Malaysians, according to records. 
Nearly two-thirds of the passengers on Flight 370 were from China. On April 5, 2014, what could be the wreckage of the ill-fated Malaysia Airlines flight was found: what appeared to be the remnants of flight MH370 were spotted drifting in a remote section of the Indian Ocean. Compensation for loss of life is vastly different between U.S. passengers and non-U.S. passengers. "If the claim is brought in the U.S. court, it's of significantly more value than if it's brought into any other court." Some victims and survivors of the Indonesian and Malaysian airlines' crash cases would like to file suit in a United States court, rather than in an Indonesian or Malaysian court, in order to receive a larger compensation package for the damage caused by the accidents that occurred in the Java Sea and the Indian Ocean. Each victim and survivor of the Indonesian and Malaysian airlines' crash cases will receive an unconditional 113,100 Units of Account (SDR) as compensation for damage from Indonesia's AirAsia and Malaysia Airlines in accordance with Article 21(1) (absolute, strict, no-fault liability system) of the 1999 Montreal Convention. But if Indonesia AirAsia and Malaysia Airlines cannot prove, under Article 21(2) (presumed fault system) of the 1999 Montreal Convention, that (1) such damage was not due to the negligence or other wrongful act or omission of the air carrier or its servants or agents, or (2) such damage was solely due to the negligence or other wrongful act or omission of a third party, then Indonesia AirAsia and Malaysia Airlines will bear unlimited liability to each victim and survivor of the two airlines' crash cases. 
In this researcher's view, for the aforementioned reasons, and under the laws of China, Indonesia, Malaysia and Korea, some victims and survivors of the crash of the two flights are entitled to receive possibly from more than 113,100 SDR up to 5 million US$ from the two airlines or from the aviation insurance company, based on the decision of an American court. It could also be argued that it is reasonable and necessary to revise the clause referring to bodily injury to a clause mentioning personal injury, based on Article 17 of the 1999 Montreal Convention, so as to include mental injury and condolence in the near future.

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars are competing with each other, we would run the risk of obtaining biased estimates of cross elasticity between them if we focus on only new cars or on only used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. However, there are some exceptions. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices. Some of the conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used the actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also used actual prices like Park (1998), but the quantitative aspect of competitive price promotion between new and used cars of the same model was explored. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that amongst new and used cars of different models. Specifically, I apply the nested logit model that assumes the car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model that assumes that there is no decision hierarchy but that new and used cars of different models are all substitutable at the first stage. 
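The two-stage structure described above corresponds to a standard nested logit in which each car model is a nest containing a new and a used alternative. A minimal sketch with hypothetical utilities (mu is the dissimilarity parameter; mu = 1 collapses the model to the flat IIA case):

```python
import math

def nested_logit_probs(utilities, mu):
    """Two-level nested logit: each nest is a car model; alternatives
    within a nest are {"new", "used"}. mu in (0, 1] is the dissimilarity
    (inclusive value) parameter; mu = 1 reduces to a flat MNL (IIA)."""
    # inclusive value (logsum) for each nest
    logsums = {nest: math.log(sum(math.exp(v / mu) for v in alts.values()))
               for nest, alts in utilities.items()}
    denom = sum(math.exp(mu * iv) for iv in logsums.values())
    probs = {}
    for nest, alts in utilities.items():
        p_nest = math.exp(mu * logsums[nest]) / denom          # P(model)
        within = sum(math.exp(v / mu) for v in alts.values())
        for alt, v in alts.items():                            # P(new/used | model)
            probs[(nest, alt)] = p_nest * math.exp(v / mu) / within
    return probs

# hypothetical utilities for two models, each offered new and used
u = {"Jetta": {"new": 1.0, "used": 0.8},
     "Elantra": {"new": 0.9, "used": 0.7}}
p = nested_logit_probs(u, mu=0.6)   # choice probabilities over all four options
```

With mu below 1, substitution is stronger within a nest (new vs. used of the same model) than across nests, which is exactly the cross-elasticity assumption the study tests against the IIA alternative.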
The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, a used car of the same model keeps decreasing its price until it regains the lost market share to maintain the status quo. The new car settles down to a lowered market share due to the used car's reaction. 
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used Jetta as a focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reacting cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, resulting in a less aggressive suggested used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used Elantra to reconfirm the result for Jetta and came to the same conclusion. In the third simulation, I had Corolla offer a $1,000 rebate to see what could be the best response for Elantra's new and used cars. Interestingly, Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model that assumes choice between new and used cars at the first stage and brand choice at the second stage could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this fact, the BNU model that assumes brand choice at the first stage and choice between new and used cars at the second stage may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars given this market environment. However, suppose there are dealerships that carry both new and used cars of various models; then the NUB model might fit the data as well as the BNU model. 
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
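The status-quo simulation logic described earlier (a reacting car lowers its price until its market share recovers its pre-rebate level) can be sketched as a simple iterative search. The logit-style share curve below is a hypothetical stand-in, not the paper's estimated model or the modified Lanczo's method itself:

```python
import math

def restore_share(share_fn, target_share, start_price, step=1.0, max_iter=100000):
    """Lower the price in small steps until share_fn(price) recovers
    target_share; a crude stand-in for the paper's iterative procedure."""
    price = start_price
    for _ in range(max_iter):
        if share_fn(price) >= target_share:
            break
        price -= step
    return price

# hypothetical share curve: share rises as the car's own price falls
share = lambda p: 1.0 / (1.0 + math.exp(0.001 * (p - 10000.0)))

# suppose a competitor's rebate pushed our share from 0.60 down to 0.50
# (share(10000) = 0.50); find the price that restores the 0.60 share
new_price = restore_share(share, target_share=0.60, start_price=10000.0)
discount = 10000.0 - new_price   # required price discount
```

In the study's simulations the same fixed-point idea is applied with the estimated nested logit shares, which is how the $160 and $205 best-response discounts were obtained.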
