• Title/Summary/Keyword: Relative value


The Effect of Application of Cattle Slurry on Dry Matter Yield and Feed Values of Tall Fescue (Festuca arundinacea Schreb.) in Uncultivated Rice Paddy (유휴 논 토양에서 액상 우분뇨의 시용이 톨 페스큐의 건물수량과 사료가치에 미치는 영향)

  • Jo, Ik-Hwan
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.27 no.1
    • /
    • pp.9-20
    • /
    • 2007
  • This experiment was conducted to investigate the effects of applying water-diluted and undiluted cattle slurry on the seasonal and annual dry matter yields and feed values of tall fescue in uncultivated rice paddy, in comparison with chemical fertilizer, in order to determine the optimal application season and dilution level of cattle slurry. When diluted or undiluted cattle slurry was applied to uncultivated rice paddy, annual dry matter yields were 11.31 to 14.81 ton DM/ha (average 13.13 ton DM/ha) for diluted and 10.57 to 12.51 ton DM/ha (average 11.50 ton DM/ha) for undiluted slurry, both higher than the no-fertilizer yield (9.21 ton DM/ha). Furthermore, separate application in early spring and summer (SA plots) and separate application in early spring, late spring, and summer (SUA plots) for undiluted slurry, and whole application in spring (DS plots), separate application in early spring and summer (DSA plots), and separate application in early spring, late spring, and summer (DSUA plots) for diluted slurry gave significantly (P<0.05) higher annual dry matter yields than the no-fertilizer plots. Plots receiving chemical fertilizer with nitrogen (N), phosphorus (P), and potassium (K) yielded 15.38 ton DM/ha annually, significantly (P<0.05) more than chemical fertilizer containing only P and K and than the no-fertilizer plots. Moreover, the average annual DM yield for the chemical fertilizer with P and K was lower than that of the cattle slurry applications. The efficiency of DM production per unit of mineral nitrogen from chemical fertilizer averaged 31.3 kg DM/kg N annually; across cutting times of tall fescue, it decreased in the order of 2nd, 1st, and 3rd growth. The efficiencies of annual DM production per unit nitrogen for diluted and undiluted cattle slurry were 26.1 and 15.3 kg DM/kg N, respectively, and were highest in the 2nd growth. Relative to mineral nitrogen, the DM production efficiencies of cattle slurry were 48.9% (undiluted) and 83.4% (diluted). Annual crude protein (CP) contents of tall fescue under aqueous cattle slurry applications were 9.9 to 11.6%, significantly (P<0.05) higher than under no fertilization (9.5%) and chemical fertilizer (9.0 to 9.8%), whereas annual average NDF and ADF contents were lowest under no fertilization. In contrast, relative feed value (RFV) and total digestible nutrients (TDN) of the no-fertilizer plots were significantly (P<0.05) higher than those of the other plots. Application of cattle slurry, diluted or not, significantly increased the yields of crude protein and total digestible nutrients compared with no fertilizer and/or P and K fertilizer (P<0.05). These trends were most conspicuous for water-diluted cattle slurry applied separately in early spring, late spring, and summer (DSUA plots).
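The nitrogen efficiencies reported above (kg DM per kg N) follow the standard apparent-efficiency calculation: the yield gain over the unfertilized control divided by the amount of N applied. A minimal sketch, where the yield values and N rate are hypothetical inputs, not figures taken from the paper:

```python
def dm_efficiency(dm_fertilized_ton, dm_control_ton, n_applied_kg):
    """Apparent N-use efficiency in kg DM per kg N: the dry matter gain
    over the unfertilized control (converted from ton/ha to kg/ha),
    divided by the nitrogen applied (kg N/ha)."""
    return (dm_fertilized_ton - dm_control_ton) * 1000.0 / n_applied_kg
```

For example, a fertilized yield of 12.0 ton DM/ha against a 9.0 ton DM/ha control at 100 kg N/ha gives 30 kg DM/kg N, the same order of magnitude as the values quoted in the abstract.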

The Effectiveness of Fiscal Policies for R&D Investment (R&D 투자 촉진을 위한 재정지원정책의 효과분석)

  • Song, Jong-Guk;Kim, Hyuk-Joon
    • Journal of Technology Innovation
    • /
    • v.17 no.1
    • /
    • pp.1-48
    • /
    • 2009
  • Recently we have found symptoms that R&D fiscal incentives may not be working as intended, based on an analysis of current statistics on firms' R&D. First, the growth rate of R&D investment in the private sector has slowed over the recent decade: the average (real) growth rate of R&D investment was 7.1% from 1998 to 2005, versus 13.9% from 1980 to 1997. Second, the relative share of R&D investment by SMEs decreased from 29% ('01) to 21% ('05), even though the tax credit for SMEs has been more generous than that for large firms. Third, the R&D expenditure of large firms (other than the 3 leading firms) has not increased since the late 1990s. We therefore need evidence on whether fiscal incentives are effective in increasing firms' R&D investment. For the econometric model we use firm-level unbalanced panel data for 4 years (2002 to 2005) derived from the MOST database compiled from the annual survey, "Report on the Survey of Research and Development in Science and Technology". We use a fixed effect model (Hausman test results accept the fixed effect model at the 1% significance level) and estimate it for all firms, large firms, and SMEs respectively. The analysis yields the following results. For large firms: i) R&D investment responds elastically (1.20) to sales volume; ii) government R&D subsidy induces R&D investment only weakly (0.03); iii) the tax price elasticity is almost unity (-0.99); iv) for large firms, tax incentives are more effective than R&D subsidies. For SMEs: i) sales volume increases R&D investment only weakly (0.043); ii) government R&D subsidy crowds out SME R&D investment, though not seriously (-0.0079); iii) the tax price elasticity is very inelastic (-0.054). For comparison with other studies, Koga (2003) reports a similar tax price elasticity for Japanese firms (-1.0036), Hall (1992) reports a unit tax price elasticity, and Bloom et al. (2002) report $-0.354{\sim}-0.124$ in the short run. From these results we recommend that government R&D subsidies focus on areas such as basic research and the public sector (defense, energy, health, etc.) that do not overlap with private-sector R&D. For SMEs, the government should focus on establishing R&D infrastructure. To promote tax incentive policy, the tax incentive scheme for large firms' R&D investment needs to be strengthened; we recommend that the tax credit for large firms be extended to the total volume of R&D investment.

  • PDF
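The elasticities above come from a fixed-effects (within) panel estimator. The core of that estimator can be sketched by demeaning each variable by firm and running OLS on the demeaned data; a toy version for a single regressor, with illustrative variable names (not the paper's actual specification, which includes further controls):

```python
import numpy as np

def fixed_effects_elasticity(log_rd, log_sales, firm_ids):
    """Within (fixed-effects) estimator of the elasticity of R&D with
    respect to sales: demean log R&D and log sales by firm, then run
    OLS through the origin on the demeaned data."""
    y = np.asarray(log_rd, float).copy()
    x = np.asarray(log_sales, float).copy()
    firm_ids = np.asarray(firm_ids)
    for f in np.unique(firm_ids):
        m = firm_ids == f
        y[m] -= y[m].mean()   # remove the firm's fixed effect from y
        x[m] -= x[m].mean()   # and from x
    return float(x @ y / (x @ x))
```

Because demeaning removes any time-invariant firm effect exactly, a dataset generated with heterogeneous firm intercepts still recovers the common slope.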

Community Ecological Study on the Quercus acuta Forests in Bogildo-Island (보길도(甫吉島) 붉가시나무림(林)의 군락생태학적(群落生態學的) 연구(硏究))

  • Kim, Chong-Young;Lee, Jeong-Seok;Oh, Kwang-In;Jang, Seok-Ki;Park, Jin-Hong
    • Journal of Korean Society of Forest Science
    • /
    • v.89 no.5
    • /
    • pp.618-629
    • /
    • 2000
  • This study was carried out to investigate the ecological niche of Quercus acuta communities on Bogildo-island from July to October, 1998. The island is occupied by subtropical evergreen broad-leaved forest. The community ecology of Q. acuta, a dominant species of these subtropical forests, is very important for successful forest management. Sixteen quadrats dominated by Q. acuta were selected to examine vegetation characteristics (plant identification, D.B.H.) and environmental elements (microtopography, altitude, slope degree, aspect, illumination, and soil physicochemical properties). On the basis of the field survey data, importance values were calculated for the dominance of Q. acuta and volume growth was analyzed from tree ring widths. The results were as follows: 1. The vascular plants identified in the survey comprised 54 families, 91 genera, 113 species, 9 varieties, and 1 forma. Forty-five kinds were evergreen; 6 kinds (Camellia japonica, Ligustrum japonicum, Eurya japonica, Smilax china, Trachelospermum asiaticum var. intermedium, Carex lanceolata) were observed in all plots, and 5 species (Cinnamomum japonicum, Ardisia japonica, Cymbidium goeringii, Dryopteris bissetiana, Viburnum erosum) were observed in most plots (over 80%). 2. The dominant species per stratum, each in descending order, were Quercus acuta, Castanopsis cuspidata sp., Quercus salicina, Pinus thunbergii, and Prunus sargentii in the tree layer; Camellia japonica, Ligustrum japonicum, Quercus acuta, Eurya japonica, and Castanopsis cuspidata sp. in the subtree layer; Camellia japonica, Ligustrum japonicum, Smilax china, Cinnamomum japonicum, and Viburnum erosum in the shrub layer; and Trachelospermum asiaticum var. intermedium, Ardisia japonica, Carex lanceolata, Camellia japonica (seedlings), and Quercus acuta (seedlings) in the herb layer. 3. Quercus acuta could be suggested to be a shade-intolerant tree, considering its distribution on southern, western, northern, and eastern slopes, in descending order. 4. Mean relative illumination in the forest was 0.89%, a relatively low brightness. 5. Sustainment of the Quercus acuta community could not be confirmed, since the D.B.H. distribution analysis indicated an even-aged structure rather than a reverse-J curve. 6. Annual ring width analysis (mean 2.44 mm) showed three stages: gentle increase (years 1~12; 2.04 mm), relatively steep increase (years 13~22; 2.95 mm), and decrease or stagnation (year 23 onward; 2.41 mm).

  • PDF
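The importance values mentioned in the abstract are conventionally computed as the sum of relative density, relative frequency, and relative dominance (basal area) per species, each expressed in percent. A sketch assuming that standard convention (the abstract does not spell out its exact formula):

```python
def importance_values(density, frequency, basal_area):
    """Importance value per species: relative density + relative
    frequency + relative dominance (basal area), each as a percent
    of the plot totals. Inputs are dicts keyed by species name."""
    def relative(d):
        total = sum(d.values())
        return {k: 100.0 * v / total for k, v in d.items()}
    rd, rf, rb = relative(density), relative(frequency), relative(basal_area)
    return {k: rd[k] + rf[k] + rb[k] for k in density}
```

With this convention the importance values across species always sum to 300.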

Evaluation of the Parameters of Soil Potassium Supplying Power for Predicting Yield Response, K2O Uptake and Optimum K2O Application Levels in Paddy Soils (수도(水稻)의 가리시비반응(加里施肥反応)과 시비량추정(施肥量推定)을 위한 가리공급력(加里供給力) 측정방법(測定方法) 평가(評価) -I. Q/I 관계(関係)에 의(依)한 가리(加里) 공급력측정(供給力測定)과 시비반응(施肥反応))

  • Park, Yang-Ho;An, Soo-Bong;Park, Chon-Suh
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.16 no.1
    • /
    • pp.42-49
    • /
    • 1983
  • In order to examine the possibility of predicting fertilizer K requirement from the K supplying capacity of soil, the relative K activity ratio Kas/Kai, the potential buffering capacity of $K^+$ ($PBC^k$; the linear regression coefficient), and its activity ratio ($AR^k_o$; $K^+$/${\sqrt{Ca^{2+}+Mg^{2+}}}$ in mol/l) at ${\Delta}K = 0$ in the Q/I relationship of Beckett (1964) were determined for soils before flooding and for samples taken at the heading stage of transplanted rice in a pot experiment. These parameters, taken as measures of the K supplying capacity of soils, were investigated through correlation studies among themselves and with other factors such as grain yield and the amount of $K_2O$ taken up by the rice plant at harvest. The results may be summarized as follows: 1. The potassium supplying power of the flooded soil was considered to be governed by the amount of exchangeable K before flooding, since there was little release of exchangeable K from non-exchangeable K during the incubation period of 67 days. 2. The $PBC^k$ values in soils before flooding were 0.027, 0.014, and 0.009, whereas the $AR^k_o{\times}10^{-3}$ values were 9.1, 7.6, and 15.4, respectively, in clay, loam, and sandy loam soils. 3. The $PBC^k$ values determined in soil samples taken at the heading stage varied little from the values of the original soil, regardless of the different fertilizer treatments and textures, showing the possibility of using them as a factor in soil improvement to increase the efficiency of fertilizer K. 4. Significant yield responses to potassium fertilizer application were observed wherever the $AR^k_o$ values in soil at the heading stage dropped down to the original $AR^k_o$ values, regardless of the fertilizer application level. 5. Higher correlations with grain yield and the amount of $K_2O$ uptake were obtained by using both soil factors, $PBC^k$ and $AR^k_o$ at the heading stage, than by using either single factor. 6. The Kas/Kai value in the soil, estimated prior to the experiment, had a high positive correlation with the $AR^k_o$ determined in the soil at the heading stage and could be used as a soil factor for predicting potassium fertilizer requirement.

  • PDF
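The two Q/I parameters are read off the linear part of Beckett's Q/I plot: $PBC^k$ is the slope of ${\Delta}K$ against the activity ratio AR, and $AR^k_o$ is the AR value at which ${\Delta}K = 0$ (the x-intercept). A minimal sketch with synthetic data, assuming a simple least-squares fit of the linear region:

```python
import numpy as np

def qi_parameters(activity_ratio, delta_k):
    """Fit the linear part of a Beckett Q/I plot (Delta-K versus the
    activity ratio AR). The slope is PBC^k (the potential buffering
    capacity); the AR at which Delta-K = 0 is AR^k_0."""
    slope, intercept = np.polyfit(activity_ratio, delta_k, 1)
    return slope, -intercept / slope  # (PBC^k, AR^k_0)
```

A soil with a larger $PBC^k$ resists changes in the activity ratio as K is added or removed, which is why the abstract treats it as a measure of buffering.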

Effect of Eddy on the Cycles of 210Po and 234Th in the Central Region of the Korean East Sea (동해 중부해역에서 210Po과 234Th의 순환에 대한 소용돌이의 영향)

  • YANG, HAN SOEB;KIM, SOUNG SOO;LEE, JAE CHUL
    • 한국해양학회지
    • /
    • v.30 no.4
    • /
    • pp.279-287
    • /
    • 1995
  • The vertical profiles of natural 210Pb, 210Po, and 234Th activities were measured in the upper 100 m of the water column at three stations in the central region of the Korean East Sea during May 1992, and the distribution of these radionuclides is discussed in relation to the formation of a warm eddy and water masses. The main thermocline lay between 50 and 100 m depth at the southern station (Sta. A1) and between 10 and 50 m at the coastal station off Sockcho (Sta. B10). In contrast, the main thermocline at Sta. A10, located near the center of the warm eddy, was observed below 230 m depth. Between 50 and 220 m at Sta. A10 there is a relatively homogeneous water mass of 10.1${\pm}$0.5$^{\circ}C$, significantly higher in temperature and lower in nutrients than at the other two stations; this appears to be due to sinking of warm surface water in which nutrients had been completely consumed. Both 210Pb and 210Po show the highest concentrations at Sta. A1 and the lowest at Sta. B10 among the three stations. The 210Pb activity is generally higher in the upper layer than in the lower layer, while 210Po activity shows the reverse pattern at all three stations. At Sta. A1 and Sta. B10, the activities of 210Po relative to its parent 210Pb were deficient in the water column above the main thermocline but in excess below it. However, the station near the center of the warm eddy (Sta. A10) shows no excess of 210Po below 50 m, although a deficiency is found in the upper layer as at the other stations. At Sta. A1 and B10, 234Th activities are slightly lower in the surface mixed layer than in the deeper region. At Sta. A10, however, 234Th activity in the upper 30 m is higher than below 50 m or at the same depths of the other stations, probably because of the high concentration of particulate matter. The residence time of 210Po in the surface mixed layer at Sta. A10 is 0.4 year, much shorter than at the other two stations (about one year). Above 100 m depth, the residence times of 234Th range from 18 to 30 days at all stations, without significant regional variation. The percentages of recycled 210Po within the thermocline are 39% and 92% at Sta. A1 and Sta. B10, respectively; the much higher value at Sta. B10 may be due to the thinness of the mixed layer as well as the slower recycling rate of 210Po in the main thermocline.

  • PDF
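Residence times of the kind quoted above (18 to 30 days for 234Th) are commonly derived from a steady-state scavenging balance, in which the removal rate of the particle-reactive daughter is inferred from its deficit relative to the parent. A sketch under that standard assumption; the abstract does not state the paper's exact method, and the 24.1-day 234Th half-life is the only constant used:

```python
import math

TH234_DECAY = math.log(2) / 24.1  # 234Th decay constant, per day

def residence_time_days(a_parent, a_daughter, lam=TH234_DECAY):
    """Steady-state scavenging residence time of a particle-reactive
    daughter nuclide from its deficit relative to the parent:
    tau = A_daughter / (lambda * (A_parent - A_daughter))."""
    return a_daughter / (lam * (a_parent - a_daughter))
```

A larger deficit (A_parent much greater than A_daughter) implies faster removal onto particles and hence a shorter residence time.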

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it uses two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable: in a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS.
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores for various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable. It resembles human evaluation, in which different items are assigned specific weights that are then summed into a weighted average. Missing properties can also be checked more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of the users who hold the same documents in their collections; in other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data.
We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and to show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework, whereas the previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms.
While the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary in our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper should be applicable to various domains, including social media, where time value is considered important.
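The direction-free propagation idea behind mutual-interaction ranking can be illustrated, in heavily simplified form and not as the paper's actual algorithm, by a power iteration over a symmetrized weight matrix: each entity's score is repeatedly refreshed from the scores of the entities it interacts with, regardless of which way the links point.

```python
import numpy as np

def mutual_interaction_rank(weights, n_iter=100):
    """Toy sketch of direction-free graph ranking: symmetrize the
    weighted adjacency matrix (so link direction is ignored), then
    propagate and L1-normalize scores with a lazy update, which keeps
    the iteration stable even on bipartite-like graphs."""
    w = np.asarray(weights, float)
    w = (w + w.T) / 2.0              # mutual interaction: ignore direction
    s = np.ones(len(w)) / len(w)     # uniform initial scores
    for _ in range(n_iter):
        s = s + w @ s                # lazy propagation step
        s /= s.sum()                 # keep scores on the simplex
    return s
```

On a three-node path graph the middle node, which interacts with both others, ends up with the highest score.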

The Correction Factor of Sensitivity in Gamma Camera - Based on Whole Body Bone Scan Image - (감마카메라의 Sensitivity 보정 Factor에 관한 연구 - 전신 뼈 영상을 중심으로 -)

  • Jung, Eun-Mi;Jung, Woo-Young;Ryu, Jae-Kwang;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.12 no.3
    • /
    • pp.208-213
    • /
    • 2008
  • Purpose: The whole body bone scan is one of the most frequently performed examinations in nuclear medicine. Asan Medical Center uses various gamma camera systems - manufactured by PHILIPS (PRECEDENCE, BRIGHTVIEW), SIEMENS (ECAM, ECAM signature, ECAM plus, SYMBIA T2), and GE (INFINIA) - for whole body scans. However, the sensitivities of these cameras are not the same, which makes consistent diagnosis of patients difficult. Our purpose was, when performing whole body bone scans, to exclude uncontrollable factors and to correct controllable ones such as the inherent sensitivity of each gamma camera. In this study, we measured each gamma camera's sensitivity and investigated reasonable correction factors for whole body bone scans, so that a patient's condition can be followed up across different gamma cameras. Materials and Methods: We used a $^{99m}Tc$ flood phantom, prepared according to the IAEA recommendation based on the typical count rate of a whole body scan, and measured count rates on the gamma cameras - PRECEDENCE, BRIGHTVIEW, ECAM, ECAM signature, ECAM plus, and INFINIA - in the Asan Medical Center nuclear medicine department. For the sensitivity measurements, all gamma cameras were equipped with LEHR collimators (Low Energy High Resolution multi-parallel collimators), the $^{99m}Tc$ energy window was set to about 15% around the 140 keV photopeak, and data were acquired for 60 sec and 120 sec on all cameras. To verify whether the calculated correction factors could be applied to whole body bone scans, we performed whole body bone scans on 27 patients and compared and analyzed the results. Results: In the $^{99m}Tc$ flood phantom experiment, the sensitivity of ECAM plus was the highest, followed in order by ECAM signature, SYMBIA T2, ECAM, BRIGHTVIEW, INFINIA, and PRECEDENCE.
The sensitivity correction factors express each gamma camera's relative sensitivity ratio, calculated with ECAM's sensitivity as the reference (ECAM plus 1.07, ECAM signature 1.05, SYMBIA T2 1.03, ECAM 1.00, BRIGHTVIEW 0.90, INFINIA 0.83, PRECEDENCE 0.72). When the correction factors derived from the $^{99m}Tc$ experiment were compared with the correction factors derived from the whole body bone scans, the difference was statistically insignificant (p<0.05) for whole body bone scan diagnosis. Conclusion: For diagnosing bone metastasis in cancer patients, the whole body bone scan is used as a follow-up test because of its advantages (high sensitivity, non-invasiveness, ease of performance). As a follow-up study, however, it is hard to keep performing whole body bone scans on the same gamma camera, and even with the same camera, changes in equipment performance over time must be considered. We therefore expect that applying sensitivity correction factors to patients who undergo regular whole body bone scans will add consistency to their diagnosis.

  • PDF
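The correction factors listed in the results are simple ratios of each camera's flood-phantom count rate to that of the reference camera (ECAM). A sketch of that calculation and of its use for normalizing a scan, with the count-rate values as hypothetical inputs:

```python
def sensitivity_correction_factors(count_rates, reference="ECAM"):
    """Relative sensitivity of each camera versus the reference camera,
    computed from flood-phantom count rates acquired under identical
    conditions (same phantom, collimator, window, acquisition time)."""
    ref = count_rates[reference]
    return {cam: rate / ref for cam, rate in count_rates.items()}

def correct_counts(raw_counts, factor):
    """Scale a scan's counts to what the reference camera would record."""
    return raw_counts / factor
```

Dividing a scan's counts by its camera's factor puts scans from different cameras on the reference camera's scale, which is the consistency the conclusion argues for.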

Agronomic Characteristics and Productivity of Winter Forage Crop in Sihwa Reclaimed Field (시화 간척지에서 월동 사료작물의 초종 및 품종에 따른 생육특성 및 생산성)

  • Kim, Jong Geun;Wei, Sheng Nan;Li, Yan Fen;Kim, Hak Jin;Kim, Meing Joong;Cheong, Eun Chan
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.40 no.1
    • /
    • pp.19-28
    • /
    • 2020
  • This study was conducted to compare the agronomic characteristics and productivity of winter forage crops in reclaimed land, according to species and variety. The winter forage crops used in this study were developed by the National Institute of Crop Science, RDA. Oats ('Samhan', 'Jopung', 'Taehan', 'Dakyung' and 'Hi-early'), forage barley ('Yeongyang', 'Yuyeon', 'Yujin', 'Dacheng' and 'Yeonho'), rye ('Gogu', 'Jogreen' and 'Daegokgreen') and triticale ('Shinyoung', 'Saeyoung', 'Choyoung', 'Sinseong', 'Minpung' and 'Gwangyoung') were planted in the reclaimed land of the Sihwa district in Hwaseong, Gyeonggi-do in the autumn of 2018, cultivated using each standard cultivation method, and harvested in May 2019 (oat and rye: 8 May; barley and triticale: 20 May). The emergence rate was lowest in rye (84.4%), while forage barley, oat, and triticale were at similar levels (92.8 to 98.8%). Tiller number was lowest in triticale (416 tillers/㎡) and highest in oat (603 tillers/㎡). Rye was earliest in heading date (April 21), triticale headed on April 26, and oat and forage barley in early May (May 2 and May 5). Plant height was greatest in rye (95.6 cm), triticale and forage barley were similar (76.3 and 68.3 cm), and oat was shortest (54.2 cm). Dry matter (DM) content of rye was highest, averaging 46.04%, and the others were similar at 35.09~37.54%. Productivity differed among species and varieties: forage barley had the highest dry matter yield (4,344 kg/ha), oat was similar to barley, and rye and triticale were lowest. Among oats, 'Dakyung' and 'Hi-early' had higher DM yields (4,283 and 5,490 kg/ha), and among forage barleys the 'Yeonho', 'Yujin' and 'Dacheng' varieties were higher (4,888, 5,433 and 5,582 kg/ha). Crude protein content of oat (6.58%) tended to be the highest, and its TDN (total digestible nutrient) content (63.61%) was higher than that of the other species.
In RFV (relative feed value), oats averaged 119, while the other three species averaged 92~105. The 1,000-grain weight was highest in triticale (43.03 g) and lowest in rye (31.61 g). In the evaluation of germination rate according to salt concentration (salinity), the germination rate was maintained at about 80% from 0.2 to 0.4% salinity. The correlation coefficient between germination and salt concentration was high in oat and barley (-0.91 and -0.92) and lowest in rye (-0.66). In conclusion, forage barley and oats showed good productivity in reclaimed land. Adaptability also differs among varieties of forage crops; when growing forage crops in reclaimed land, selection of highly adaptable species and varieties is recommended.
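RFV figures like those above are conventionally computed from the ADF and NDF fiber fractions using the standard formula in which full-bloom alfalfa scores 100. A sketch, assuming that standard formula was used (the abstract does not state its exact calculation):

```python
def relative_feed_value(adf_pct, ndf_pct):
    """Standard RFV estimate from fiber fractions (% of DM):
    digestible dry matter DDM = 88.9 - 0.779 * ADF,
    dry matter intake   DMI  = 120 / NDF (as % of body weight),
    RFV = DDM * DMI / 1.29 (scaled so full-bloom alfalfa = 100)."""
    ddm = 88.9 - 0.779 * adf_pct
    dmi = 120.0 / ndf_pct
    return ddm * dmi / 1.29
```

Lower ADF and NDF both push RFV up, which is why the fiber contents and RFV move in opposite directions across the species compared here.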

Structure of Export Competition between Asian NIEs and Japan in the U.S. Import Market and Exchange Rate Effects (한국(韓國)의 아시아신흥공업국(新興工業國) 및 일본(日本)과의 대미수출경쟁(對美輸出競爭) : 환율효과(換率效果)를 중심(中心)으로)

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.3-49
    • /
    • 1990
  • This paper analyzes U.S. demand for imports from the Asian NIEs and Japan, utilizing the Almost Ideal Demand System (AIDS) developed by Deaton and Muellbauer, with an emphasis on the effect of changes in the exchange rate. The empirical model assumes a two-stage budgeting process in which the first stage represents the allocation of total U.S. demand among three groups: the Asian NIEs and Japan, six Western developed countries, and the U.S. domestic non-tradables and import-competing sector. The second stage represents the allocation of total U.S. imports from the Asian NIEs and Japan among them, by country. According to the AIDS model, the share equation for the Asian NIEs and Japan in U.S. nominal GNP is estimated as a single equation for the first stage. The share equations for those five countries in total U.S. imports are estimated as a system with the general demand restrictions of homogeneity, symmetry, and adding-up, together with polynomially distributed lag restrictions. The negativity condition is also satisfied in all cases. The overall results of these complicated estimations, using quarterly data from the first quarter of 1972 to the fourth quarter of 1989, are quite promising in terms of the significance of individual estimators and other statistics. The conclusions drawn from the estimation results and the derived demand elasticities can be summarized as follows. First, the exports of each Asian NIE to the U.S. are competitive with (substitutes for) Japan's exports, while complementary to the exports of fellow NIEs, with the exception of the competitive relation between Hong Kong and Singapore. Second, the exports of each Asian NIE and of Japan to the U.S. are competitive with those of the Western developed countries, while they are complementary to the U.S. non-tradables and import-competing sector.
Third, when both the first and second stages of budgeting are considered, the imports from each Asian NIE and Japan are luxuries in total U.S. consumption. However, when only the second budgeting stage is considered, the imports from Japan and Singapore are luxuries within U.S. imports from the NIEs and Japan, while those from Korea, Taiwan, and Hong Kong are necessities. Fourth, the above results appear more concretely in their implied exchange rate effects. In general, a change in the yen-dollar exchange rate will have at least as great an impact on an NIE's share and volume of exports to the U.S., though in the opposite direction, as a change in the exchange rate of the NIE's own currency vis-à-vis the dollar. Asian NIEs, therefore, should counteract yen-dollar movements in order to stabilize their exports to the U.S. More specifically, Korea should depreciate the value of the won relative to the dollar by approximately the same proportion as the depreciation rate of the yen vis-à-vis the dollar in order to maintain the volume of Korean exports to the U.S. In the worst case scenario, Korea should devalue the won by three times the magnitude of the yen's depreciation rate in order to keep its market share in the aforementioned five countries' total exports to the U.S. Finally, this study provides additional information that may support the empirical findings on the competitive relations among the Asian NIEs and Japan. The correlation matrices of the structures of the five countries' exports to the U.S. during the 1970s and 1980s were estimated, with each country's export structure constructed as the shares of the 29 industrial sectors (as defined by the 3-digit KSIC) in that country's total exports to the U.S.
In general, the correlations between each of the four Asian NIEs and Japan, and that between Hong Kong and Singapore, are all far below 0.5, while those among the Asian NIEs themselves (except for the one between Hong Kong and Singapore) all greatly exceed 0.5. If the U.S. tends to import goods in each specific sector from different countries in a relatively constant proportion, the export structures of those countries will exhibit a high correlation. Taking this hypothesis to the extreme, if the U.S. maintained an absolutely fixed ratio between its imports from any two countries for each of the 29 sectors, the correlation between the export structures of those two countries would be perfect. Since any two goods purchased in a fixed proportion can be classified as close complements, a high correlation between export structures implies a complementary relationship, and a low correlation a competitive one. According to this interpretation, the pattern formed by the correlation coefficients among the five countries' export structures to the U.S. is consistent with the empirical findings of the regression analysis.


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this sort of knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect the work characteristics.
All analyses were conducted based on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and knowledge contribution by less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time length between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time spent from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that reference at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal.
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for academic tasks.
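The two focal measures described above can be computed directly from per-editor contribution counts. A minimal sketch, where the function names and the input format (a list of contribution counts, one per editor) are illustrative assumptions rather than the paper's actual code:

```python
import numpy as np

def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    x = np.sort(np.asarray(contribs, dtype=float))[::-1]  # descending order
    k = max(1, int(np.ceil(0.2 * len(x))))                # size of the top 20%
    return x[:k].sum() / x.sum()

def gini(contribs):
    """Gini coefficient of contributions (0 = perfect equality, -> 1 = maximal inequality)."""
    x = np.sort(np.asarray(contribs, dtype=float))        # ascending order
    n = len(x)
    cum = np.cumsum(x)
    # Standard formula based on the normalized cumulative distribution (Lorenz curve)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n
```

For example, 20 editors with 10 edits each alongside 80 editors with 1 edit each give a Pareto ratio of 200/280, close to the classic 80-20 pattern, while a perfectly equal group has a Gini coefficient of 0.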