• Title/Summary/Keyword: Variable Control


The Relations between Financial Constraints and Dividend Smoothing of Innovative Small and Medium Sized Enterprises (혁신형 중소기업의 재무적 제약과 배당스무딩간의 관계)

  • Shin, Min-Shik;Kim, Soo-Eun
    • Korean small business review
    • /
    • v.31 no.4
    • /
    • pp.67-93
    • /
    • 2009
  • The purpose of this paper is to explore the relations between financial constraints and dividend smoothing of innovative small and medium sized enterprises (SMEs) listed on the Korea Securities Market and the Kosdaq Market of the Korea Exchange. Innovative SMEs are defined as firms with a high level of R&D intensity, measured by the (R&D investment/total sales) ratio, following Chauvin and Hirschey (1993). R&D investment plays an important role as an innovation driver that can increase the future growth opportunities and profitability of a firm; therefore, R&D investment has large, positive, and consistent influences on the market value of the firm. From this point of view, we expect that innovative SMEs can adjust dividend payments faster than noninnovative SMEs, on the grounds of their future growth opportunities and profitability. We also expect that financially unconstrained firms can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through capital market access. Aivazian et al. (2006) assert that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms. We collect the sample firms from among the total SMEs listed on the Korea Securities Market and the Kosdaq Market of the Korea Exchange during the period from January 1999 to December 2007, using the KIS Value Library database. The total number of firm-year observations over the entire period is 5,544, of which 2,919 are dividend-paying firm-years and 2,625 are non-dividend firm-years. About 53% (or 2,919) of these 5,544 observations involve firms that make a dividend payment. The dividend firms are divided into two groups according to R&D intensity: innovative SMEs with R&D intensity above the median and noninnovative SMEs with R&D intensity below the median. The number of firm-year observations of the innovative SMEs is 1,506, and that of the noninnovative SMEs is 1,413. Furthermore, the innovative SMEs are divided into two groups according to the level of financial constraints: financially unconstrained firms and financially constrained firms. The number of firm-year observations of the former is 894, and that of the latter is 612. Although all available firm-year observations of the dividend firms are collected, firms in financial industries such as banks, securities companies, insurance companies, and other financial services companies are deleted, because their capital structure and business style differ widely from those of general manufacturing firms. Stock repurchases are included in dividend payments because Grullon and Michaely (2002) examined the substitution hypothesis between dividends and stock repurchases. However, our data structure is an unbalanced panel, since there is no requirement that firm-year observations be available for every firm during the entire period from January 1999 to December 2007 in the KIS Value Library database. First, we estimate the classic Lintner (1956) dividend adjustment model, in which the decision to smooth dividends or to adopt a residual dividend policy depends on financial constraints measured by market accessibility.
The Lintner model indicates that firms maintain a stable, long-run target payout ratio and that they partially adjust the gap between the current payout ratio and the target payout ratio each year. In the Lintner model, the dependent variable is the current dividend per share (DPSt), and the independent variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt). We hypothesize that firms partially adjust the gap between the current dividend per share (DPSt) and the target payout ratio (Ω) each year when the past dividend per share (DPSt-1) deviates from the target payout ratio (Ω). Second, we estimate an expanded model that extends the Lintner model by including the determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transactions cost theory. In the expanded model, the dependent variable is the current dividend per share (DPSt), the explanatory variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt), and the control variables are the current capital expenditure ratio (CEAt), the current leverage ratio (LEVt), the current operating return on assets (ROAt), the current business risk (RISKt), the current trading volume turnover ratio (TURNt), and the current dividend premium (DPREMt). Among these control variables, CEAt, LEVt, and ROAt are determinants suggested by the residual dividend theory and the agency theory, ROAt and RISKt by the dividend signaling theory, TURNt by the transactions cost theory, and DPREMt by the catering theory. Third, we estimate the Lintner model and the expanded model using the panel data of the financially unconstrained firms and the financially constrained firms, which are divided into two groups according to the level of financial constraints. We expect that the financially unconstrained firms can adjust dividend payments faster than the financially constrained firms, because the former can finance investment funds more easily through market access than the latter. We analyzed descriptive statistics such as mean, standard deviation, and median to delete outliers from the panel data, conducted one-way analysis of variance to check for industry-specific effects, and conducted difference tests of firm characteristic variables between innovative SMEs and noninnovative SMEs as well as between financially unconstrained firms and financially constrained firms. We also conducted correlation analysis and variance inflation factor analysis to detect any multicollinearity among the independent variables. Both the correlation coefficients and the variance inflation factors are low enough that multicollinearity among the independent variables may be ignored. Furthermore, we estimate both the Lintner model and the expanded model using panel regression analysis. We first test whether time-specific and firm-specific effects are present in our panel data through the Lagrange multiplier test proposed by Breusch and Pagan (1980), and second conduct the Hausman test to show that the fixed effects model fits our panel data better than the random effects model. The main results of this study can be summarized as follows.
The determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transactions cost theory, significantly explain the dividend policy of innovative SMEs. The Lintner model indicates that firms maintain a stable, long-run target payout ratio and that they partially adjust the gap between the current payout ratio and the target payout ratio each year. Among the core variables of the Lintner model, the past dividend per share has a larger effect on dividend smoothing than the current earnings per share. These results suggest that innovative SMEs maintain a stable, long-run dividend policy that sustains the past dividend per share level in the absence of special corporate circumstances. The main results show that the dividend adjustment speed of innovative SMEs is faster than that of noninnovative SMEs. This means that innovative SMEs with a high level of R&D intensity can adjust dividend payments faster than noninnovative SMEs, on the grounds of their future growth opportunities and profitability. The other main results show that the dividend adjustment speed of financially unconstrained SMEs is faster than that of financially constrained SMEs. This means that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through market access. Furthermore, additional results show that the dividend adjustment speed of innovative SMEs classified by the Small and Medium Business Administration is faster than that of unclassified SMEs. Such firms are linked with various financial policies and services such as credit guarantee services, policy funds for SMEs, venture investment funds, insurance programs, and so on. In conclusion, the past dividend per share and the current earnings per share suggested by the Lintner model mainly explain the dividend adjustment speed of innovative SMEs, and financial constraints explain it partially. Therefore, if managers properly understand the relations between financial constraints and dividend smoothing of innovative SMEs, they can maintain a stable, long-run dividend policy for innovative SMEs through dividend smoothing. These are encouraging results for the Korean government, that is, the Small and Medium Business Administration, which has implemented many policies committed to innovative SMEs. This paper may have a few limitations because it may be only an early study of the relations between financial constraints and dividend smoothing of innovative SMEs. Specifically, it may not adequately capture all of the subtle features of innovative SMEs and financially unconstrained SMEs. Therefore, we think it is necessary to expand the sample firms and control variables and to use more elaborate analysis methods in future studies.
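For readers who want to see the estimation concretely, the following is a minimal sketch of the Lintner (1956) partial-adjustment regression described above, written in Python with hypothetical column names (firm, year, dps, eps). The paper uses a fixed-effects panel estimator; the sketch approximates it with firm dummies and is not the authors' exact specification.

```python
# Minimal sketch of the Lintner (1956) partial-adjustment regression,
#   DPS_t = a + b1*DPS_{t-1} + b2*EPS_t + e,
# where the speed of adjustment is c = 1 - b1 and the implied target
# payout ratio is omega = b2 / (1 - b1).
# Column names (firm, year, dps, eps) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def lintner_adjustment(panel: pd.DataFrame):
    df = panel.sort_values(["firm", "year"]).copy()
    df["dps_lag"] = df.groupby("firm")["dps"].shift(1)   # DPS_{t-1}
    df = df.dropna(subset=["dps_lag"])
    # Firm dummies stand in for the fixed-effects estimator used in the paper.
    fit = smf.ols("dps ~ dps_lag + eps + C(firm)", data=df).fit()
    speed = 1.0 - fit.params["dps_lag"]        # dividend adjustment speed
    target_payout = fit.params["eps"] / speed  # implied target payout ratio
    return speed, target_payout, fit
```

Comparing the estimated speed across the subsamples defined in the abstract (innovative vs. noninnovative, unconstrained vs. constrained) mirrors the paper's main test: a larger speed means faster dividend adjustment and weaker dividend smoothing.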

Interleukin 1 Receptor Antagonist (IL-1ra) Gene Polymorphism in Children with Henoch-Schönlein Purpura Nephritis (Henoch-Schönlein Purpura 신염에서 Interleukin 1 Receptor Antagonist(IL-1ra) 유전자 다형성)

  • Hwang, Phil-Kyung;Lee, Jeong-Nye;Chung, Woo-Yeong
    • Childhood Kidney Diseases
    • /
    • v.9 no.2
    • /
    • pp.175-182
    • /
    • 2005
  • Purpose : Interleukin 1 receptor antagonist (IL-1ra) is an endogenous anti-inflammatory agent that binds to the IL-1 receptor and thus competitively inhibits the binding of IL-1α and IL-1β. An association of allele 2 with various autoimmune diseases has been reported. In order to evaluate the influence of the IL-1ra gene VNTR polymorphism on susceptibility to HSP and its possible association with disease severity, manifested by severe renal involvement and renal sequelae, we studied the carriage rate and allele frequency of the two-repeat IL-1ra allele 2 (IL1RN*2) of the IL-1ra gene in children with HSP with and without renal involvement. Methods : The IL-1ra gene polymorphisms were determined in children with HSP with (n=40) or without (n=34) nephritis who had been diagnosed at Busan Paik Hospital, and in a control group (n=163). Gene polymorphism was identified by PCR amplification of genomic DNA. Results : IL1RN*1 showed the highest allelic frequency and carriage rate in both patients with HSP and controls. The allelic frequency of IL1RN*2 was higher in patients with HSP compared to controls (4.7% vs. 2.5%, P=0.794). The carriage rate of IL1RN*2 was higher in patients with HSP compared to controls (8.1% vs. 6.8%, P=0.916). The allelic frequency of IL1RN*2 was higher in patients with HSP nephritis compared to HSP without nephritis (5.3% vs. 2.9%, P=0.356). The carriage rate of IL1RN*2 was higher in patients with HSP nephritis compared to HSP without nephritis (10.0% vs. 5.9%, P=0.523). Among 13 patients with heavy proteinuria (>1.0 g), 11 had IL1RN*1, 1 had IL1RN*2, and the others had IL1RN*4. At the time of last follow-up, 4 patients had sustained proteinuria and their genotype was IL1RN*1. Conclusion : IL1RN*1 showed the highest allelic frequency and carriage rate in both patients with HSP and controls. Our study suggests that the carriage rate and allele frequency of the two-repeat IL-1ra allele 2 (IL1RN*2) of the IL-1ra gene may not be associated with susceptibility to HSP or with the severity of renal involvement in children with HSP. (J Korean Soc Pediatr Nephrol 2005;9:175-182)
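As a point of reference for the measures reported above, here is a minimal sketch of how allele frequency, carriage rate, and a two-group comparison can be computed; the genotype pairs and 2×2 counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch of allele frequency, carriage rate, and a 2x2 comparison.
# All counts below are illustrative, not taken from the study.
from scipy.stats import fisher_exact

def allele_frequency(genotypes, allele):
    """genotypes: list of (allele_a, allele_b) pairs, one per subject."""
    alleles = [a for pair in genotypes for a in pair]
    return alleles.count(allele) / len(alleles)

def carriage_rate(genotypes, allele):
    """Fraction of subjects carrying at least one copy of the allele."""
    carriers = sum(1 for pair in genotypes if allele in pair)
    return carriers / len(genotypes)

# Hypothetical 2x2 table: IL1RN*2 carriers vs. non-carriers in patients and controls.
table = [[6, 68],    # patients: carriers, non-carriers
         [11, 152]]  # controls: carriers, non-carriers
odds_ratio, p_value = fisher_exact(table)
```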


Intake of Snacks, and Perceptions and Use of Food and Nutrition Labels by Middle School Students in Chuncheon Area (춘천지역 중학생들의 간식 섭취 실태와 식품·영양표시에 대한 인식 및 이용실태)

  • Kim, Yoon-Sun;Kim, Bok-Ran
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.41 no.9
    • /
    • pp.1265-1273
    • /
    • 2012
  • The purpose of this study was to investigate the BMI, intake of snacks, and perceptions and use of food and nutrition labels by middle school students (144 boys and 189 girls) in the Chuncheon area. The average height and weight of boys were 171.0±6.4 cm and 61.0±11.4 kg, respectively, whereas those of girls were 160.0±4.8 cm and 50.8±6.6 kg, respectively. The average body mass index (BMI) of boys and girls was 20.8±3.3 and 19.8±2.4, respectively (p<0.01). The dietary intake attitude score of girls (34.39±5.66) was higher than that of boys (33.92±5.40) (p<0.05). Subjects bought and ate snacks by themselves 1 to 3 times per week (40.2%), and the most commonly consumed snacks were cookies (23.1%), instant noodles (16.2%), ice cream (13.2%), and candy and chocolate (13.2%). The most important factor in purchasing snacks was 'taste' (4.49±0.67). When subjects bought processed foods, the rate of reading food labels was 86.6%. The most important item on the food labels was the 'expiration date' (42.9%). The degree of reading food labels on processed foods by girls (22.70±5.72) was higher than that of boys (20.96±5.35) (p<0.01). Among the 13.2% of subjects who did not read food labels, the most common reason was a lack of interest (50.0%). Of the 78.4% of subjects who read nutrition labels, the most important component of the nutrition labels was 'calories' (75.9%), and the main reason for reading nutrition labels was 'to control weight' (45.6%). In general, use of food labels correlated positively with dietary intake attitude score (p<0.05) and with use of nutrition labels (p<0.01). Using multiple regression analysis, we found that 'usefulness of dietary life' was the most significant variable affecting the importance of food and nutrition labels. Therefore, the development of an educational program on food and nutrition labels for adolescents would be effective in improving dietary life.

Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.16 no.4
    • /
    • pp.223-237
    • /
    • 2011
  • The intermittent freshwater discharge has a critical influence upon the biophysical environments and the ecosystems of the Yeongsan Estuary, where the estuary dam altered the continuous mixing of saltwater and freshwater. Though freshwater discharge is controlled by humans, the extreme events are mainly driven by heavy rainfall in the river basin and provide various impacts depending on their magnitude and frequency. This research aims to evaluate the magnitude and frequency of extreme freshwater discharges and to establish the magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from Jan 1, 1997 to Aug 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm. Partial duration series were extracted to obtain the proper probability distribution function for extreme discharges and the corresponding rainfall events. Extreme discharge events over the threshold of 133,656,000 m³ number 46 over the 13.7 years, following the Weibull distribution with k=1.4. The 3-day accumulated rainfalls that occurred one day before the peak discharges (the 1-day-before 3-day-sum rainfall) are determined as a control variable for discharge, because their magnitude is best correlated with that of the extreme discharge events. The minimum value of the corresponding 1-day-before 3-day-sum rainfall, 50.98 mm, is initially set as a threshold for the selection of discharge-inducing rainfall cases. The number of 1-day-before 3-day-sum rainfall cases after selection, however, exceeds that of the extreme discharge events. The canonical discriminant analysis indicates that water level over the target level (-1.35 m EL.) can be used to divide the 1-day-before 3-day-sum rainfall cases into discharge-induced and non-discharge ones. It also shows that a newly set threshold of 104 mm can separate these two cases without errors. The magnitude-frequency relationships between rainfall and discharge are established with the newly selected 1-day-before 3-day-sum rainfalls: $D = 1.111\times10^{8} + 1.677\times10^{6}\,\overline{r}_{3day}$ ($\overline{r}_{3day} \geq 104$, $R^{2}=0.459$), $T_{d} = 1.326\,T_{r3}^{0.683}$, and $T_{d} = 0.117\exp[0.0155\,\overline{r}_{3day}]$, where $D$ is the quantity of discharge, $\overline{r}_{3day}$ the 1-day-before 3-day-sum rainfall, and $T_{r3}$ and $T_{d}$ are, respectively, the return periods of the 1-day-before 3-day-sum rainfall and the freshwater discharge. These relations provide a framework to evaluate the effect of freshwater discharge on estuarine flow structure, water quality, and responses of ecosystems from the perspective of magnitude and frequency.
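To make the fitted relations concrete, the minimal sketch below simply evaluates the regression and return-period formulas quoted in the abstract for an arbitrary rainfall input; the example rainfall value is illustrative and the threshold behavior is an assumption based on the 104 mm cutoff stated above.

```python
# Minimal sketch applying the magnitude-frequency relations quoted above.
# Coefficients come from the abstract; the example input is illustrative.
import math

def discharge_from_rainfall(r3day_mm: float) -> float:
    """Estimated discharge volume (m^3) from the 1-day-before 3-day-sum rainfall."""
    if r3day_mm < 104:   # below the discharge-inducing threshold reported above
        return 0.0
    return 1.111e8 + 1.677e6 * r3day_mm

def discharge_return_period(rainfall_return_period_yr: float) -> float:
    """Return period of discharge (years) from the rainfall return period T_r3."""
    return 1.326 * rainfall_return_period_yr ** 0.683

def discharge_return_period_from_rainfall(r3day_mm: float) -> float:
    """Return period of discharge (years) directly from the rainfall amount."""
    return 0.117 * math.exp(0.0155 * r3day_mm)

# Example: a 150 mm 3-day rainfall one day before the peak gives roughly
# 3.63e8 m^3 of discharge with a return period of about 1.2 years.
print(discharge_from_rainfall(150), discharge_return_period_from_rainfall(150))
```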

Relationships between Micronutrient Contents in Soils and Crops of Plastic Film House (시설재배 토양과 작물 잎 중의 미량원소 함량 관계)

  • Chung, Jong-Bae;Kim, Bok-Jin;Ryu, Kwan-Sig;Lee, Seung-Ho;Shin, Hyun-Jin;Hwang, Tae-Kyung;Choi, Hee-Youl;Lee, Yong-Woo;Lee, Yoon-Jeong;Kim, Jong-Jib
    • Korean Journal of Environmental Agriculture
    • /
    • v.25 no.3
    • /
    • pp.217-227
    • /
    • 2006
  • Micronutrient status in soils and crops of plastic film houses and their relationship were investigated. A total of 203 plastic film houses were selected in the Yeongnam region (red pepper, 66; cucumber, 63; tomato, 74), and soil and leaf samples were collected. Hot-water extractable B and 0.1 N HCl extractable Cu, Zn, Fe, and Mn in the soil samples and total micronutrients in the leaf samples were analyzed. Contents of Zn, Fe, and Mn in most of the investigated soils were higher than the upper limits of the optimum level for general crop cultivation. Contents of Cu in most soils of cucumber and tomato cultivation were higher than the upper limit of the optimum level, but Cu contents in about 30% of red pepper cultivation soils were below the sufficient level. Contents of B in most soils of cucumber and tomato were above the sufficient level, but in 48% of red pepper cultivation soils B was found to be deficient. Micronutrient contents in the leaves of the investigated crops were highly variable. Contents of B, Fe, and Mn were mostly within the sufficient levels, while in 71% of red pepper samples Cu was below the deficient level and in 44% of cucumber samples Cu contents were higher than the upper limit of the sufficient level. Contents of Zn in red pepper and cucumber samples were mostly within the sufficient level, but in 62% of tomato samples Zn contents were below the deficient level. However, no visible deficiency or toxicity symptoms of micronutrients were found in the crops. No consistent relationships were found between micronutrient contents in soil and leaf, which indicates that root growth and absorption activity and interactions among the nutrients in soil might be important factors in the overall micronutrient uptake of crops. For best management of micronutrients in plastic film houses, much attention should be focused on the management of the soil and plant characteristics that control the micronutrient uptake of crops.

An Analysis on the Conditions for Successful Economic Sanctions on North Korea : Focusing on the Maritime Aspects of Economic Sanctions (대북경제제재의 효과성과 미래 발전 방향에 대한 고찰: 해상대북제재를 중심으로)

  • Kim, Sang-Hoon
    • Strategy21
    • /
    • s.46
    • /
    • pp.239-276
    • /
    • 2020
  • The failure of early economic sanctions aimed at hurting the overall economies of targeted states called for a more sophisticated design of economic sanctions. This paved the way for the advent of 'smart sanctions,' which target the supporters of the regime instead of the public mass. Despite controversies over the effectiveness of economic sanctions as a coercive tool to change the behavior of a targeted state, the transformation from 'comprehensive sanctions' to 'smart sanctions' is gaining the status of a legitimate method to impose punishment on states that do not conform to international norms, the nonproliferation of weapons of mass destruction in the particular context of this paper. The five permanent members of the United Nations Security Council have proved that they can come to an accord on imposing economic sanctions rather than adopting resolutions to wage military war against targeted states. The North Korean nuclear issue has been the biggest security threat to countries in the region, even for China, out of fear that further development of nuclear weapons in North Korea might lead to a 'domino effect' of nuclear proliferation in the Northeast Asia region. Economic sanctions were adopted by the UNSC as early as 2006, after the first North Korean nuclear test, and the sanctions measures have been continually strengthened at each stage of North Korean weapons development. While doubts remain about the effectiveness of the early sanctions on North Korea, recent sanctions that limit North Korea's exports of coal and imports of oil seem to have an impact on the regime, inducing Kim Jong-un to commit to peaceful talks since 2018. The purpose of this paper is to add a variable to the factors determining the success of economic sanctions on North Korea: preventing North Korea's evasion efforts through illegal transshipments at sea. I first analyze the cause of the recent success of the economic sanctions that led Kim Jong-un to engage in talks and then add the maritime element to the argument. There are three conditions for the success of the sanctions regime: (1) smart sanctions targeting commodities and support groups (elites) vital to regime survival, (2) China's faithful participation in the sanctions regime, and (3) preventing North Korea's maritime evasion efforts.

The Effect of VDT Work on Vision and Eye Symptoms among Workers in a TV Manufacturing Plant (텔레비젼(TV)생산업체 근로자들의 영상단말기(VDT)작업이 시력과 안증상에 미치는 영향)

  • Woo, Kuck-Hyeun;Choi, Gwang-Seo;Jung, Young-Yeon;Han, Gu-Wung;Park, Jung-Han;Lee, Jong-Hyeob
    • Journal of Preventive Medicine and Public Health
    • /
    • v.25 no.3 s.39
    • /
    • pp.247-268
    • /
    • 1992
  • This study was conducted to evaluate the effect of VDT work on eyes and vision among workers in a TV manufacturing plant. The study subjects consisted of 264 screen workers and 74 non-screen workers who were male workers under 40 years of age, had no history of ophthalmic diseases such as corneal opacities, trauma, keratitis, etc., and whose visual acuity on the pre-employment health examination by Han's test chart was 1.0 or above. The screen workers were divided into two groups by actual time of screen work in a day: Group I, 60 workers, less than 4 hours a day, and Group II, 204 workers, more than 4 hours a day. From July to October 1992, a questionnaire was administered to all the study subjects on their general characteristics and subjective eye symptoms, after which ophthalmologic tests such as visual acuity, spherical equivalent, lacrimal function, ocular pressure, slit lamp examination, and fundoscopy were conducted by one ophthalmologist. The proportion of workers whose present visual acuity was decreased by more than 0.15 in comparison with that on the pre-employment health examination by Han's test chart was 20.6% in Group II, 15.0% in Group I, and 14.9% in non-screen workers. However, the differences in proportion were not statistically significant. The proportion of workers with decreased visual acuity was not associated with age, working duration, use of a magnifying glass, or type of shift work (independent variables) in any of the three groups. However, screen workers working under poor illumination had a higher proportion of persons with decreased visual acuity than those working under adequate illumination (P<0.05). The proportion of workers whose near vision was decreased was 27.5% in Group II, 18.3% in Group I, and 28.4% in non-screen workers, and these differences in proportion were not statistically significant. Changes in near vision were not associated with the 4 independent variables in any of the three groups. Six of the seven subjective eye symptoms, all except tearing, were more common in Group I than in non-screen workers and more common in Group II than in Group I (P<0.01). The mean of the total scores for the seven subjective symptoms of each worker (2 points for always, 1 point for sometimes, 0 points for never) was not significantly different between workers with decreased visual acuity and workers with no vision change. However, the mean of the total scores for Group II was higher than those for Group I and non-screen workers (P<0.01). Total eye symptom scores were significantly correlated with the grade of screen work, use of a magnifying glass, and type of shift work. There was no independent variable correlated with the difference in visual acuity between the pre-employment health examination and the present state, the difference between far and near vision, lacrimal function, ocular pressure, or spherical equivalent. Multiple linear regression analysis of the subjective eye symptom scores revealed a positive linear relationship with actual time of screen work and shift work (P<0.01). In this study it was not observed that VDT work decreased visual acuity, but it did induce subjective eye symptoms such as eye fatigue, blurred vision, and ocular discomfort. Maintenance of adequate illumination in the workplace and control of excessive VDT work are recommended to prevent such eye symptoms.
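A minimal sketch, with hypothetical column names (symptom_score, screen_hours, shift_work), of the kind of multiple linear regression reported above, where the total eye-symptom score is regressed on actual daily screen time and shift work; this is an illustrative approximation, not the authors' exact model.

```python
# Minimal sketch: total eye-symptom score regressed on screen time and shift work.
# Column names are hypothetical placeholders; shift_work is coded 0/1.
import pandas as pd
import statsmodels.formula.api as smf

def fit_symptom_model(df: pd.DataFrame):
    fit = smf.ols("symptom_score ~ screen_hours + shift_work", data=df).fit()
    return fit.params, fit.pvalues   # positive screen_hours coefficient would match the finding
```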


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well. It is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The above term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values in every element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word that represents the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128×24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each fuzzy set; the fuzzy sets' word dimension is 8×5 bits.
Therefore, the dimension of the memory would have been 128×40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
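The storage scheme described above can be illustrated in software. The sketch below is not the paper's hardware design; it only mimics the idea of storing, for each universe element, the at most nfm non-null membership values together with the index of their fuzzy set (giving rows of nfm × (5 + 3) = 24 bits instead of 8 × 5 = 40 bits), and the comparator-style lookup of a rule weight. Names and the quantization are illustrative.

```python
# Minimal software analogue of the compact antecedent memory described above.
NFM = 3  # at most three non-null membership values per universe element

def build_antecedent_memory(membership, universe_size=128, nfm=NFM):
    """membership[s][u]: quantized membership (0..31) of fuzzy set s at element u.
    Returns one memory row per universe element: nfm pairs of (set_index, value)."""
    memory = []
    for u in range(universe_size):
        row = [(s, membership[s][u])
               for s in range(len(membership)) if membership[s][u] != 0]
        if len(row) > nfm:
            raise ValueError("scheme assumes at most nfm non-null values per element")
        row += [(0, 0)] * (nfm - len(row))   # pad to a fixed word length
        memory.append(row)
    return memory

def rule_weight(memory_row, wanted_set_index):
    """Analogue of the combinatory net: compare the stored set indices with the
    index requested by the program memory and emit the weight, or zero if absent."""
    for set_index, value in memory_row:
        if value != 0 and set_index == wanted_set_index:
            return value
    return 0
```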


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this sort of knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted based on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, which are the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, the collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time spent from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to the collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on the collaboration efficiency is more pronounced for the more academic tasks in an online community.
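For concreteness, the two focal measures can be computed from per-editor contribution counts as in the minimal sketch below; the edit counts are illustrative, and the 20% cutoff and Gini formula follow their standard definitions rather than any implementation detail reported by the authors.

```python
# Minimal sketch of the Pareto ratio and Gini coefficient for one article group.
def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    counts = sorted(contribs, reverse=True)
    top_n = max(1, int(round(0.2 * len(counts))))
    return sum(counts[:top_n]) / sum(counts)

def gini_coefficient(contribs):
    """Gini coefficient of the contribution distribution (0 = equal, higher = more unequal)."""
    counts = sorted(contribs)
    n, total = len(counts), sum(counts)
    weighted = sum((i + 1) * c for i, c in enumerate(counts))  # rank-weighted sum
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

edits_per_editor = [120, 45, 30, 9, 7, 5, 3, 2, 1, 1]   # illustrative counts
print(pareto_ratio(edits_per_editor))      # ~0.74: top 20% make ~74% of edits
print(gini_coefficient(edits_per_editor))
```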

Consumer's Negative Brand Rumor Acceptance and Rumor Diffusion (소비자의 부정적 브랜드 루머의 수용과 확산)

  • Lee, Won-jun;Lee, Han-Suk
    • Asia Marketing Journal
    • /
    • v.14 no.2
    • /
    • pp.65-96
    • /
    • 2012
  • Brands have received much attention in marketing research. When consumers consume products or services, they are exposed to many brand-related stimuli, including brand personality, brand experience, brand identity, brand communications, and so on. A special kind of new crisis occasionally confronting companies' brand management today is the brand-related rumor. An important influence on consumers' purchase decision making is the word of mouth spread by other consumers, and most decisions are influenced by others' recommendations. In light of this influence, firms have good reason to study and understand consumer-to-consumer communication such as brand rumors. The importance of brand rumors to marketers is increasing as the number of internet users and SNS (social network service) sites grows. Due to the development of internet technology, people can spread rumors without the limitations of time and place. However, relatively few studies have been published in marketing journals, and little is known about brand rumors in the marketplace. The study of rumor has a long history in all the major social sciences, but very few studies have dealt with the antecedents and consequences of any kind of brand rumor. Rumor has generally been described as a story or statement in general circulation without proper confirmation or certainty as to fact. It can also be defined as an unconfirmed proposition, passed along from person to person. Rosnow (1991) claimed that rumors are transmitted because people need to explain ambiguous and uncertain events, and talking about them reduces the associated anxiety. Negative rumors especially are believed to have the potential to devastate a company's reputation and its relations with customers. From the perspective of the marketer, negative rumors are considered harmful and extremely difficult to control in general. They are becoming a threat to a company's sustainability and sometimes lead to a negative brand image and loss of customers. Thus there is a growing concern that these negative rumors can damage brands' reputations and lead them to financial disaster as well. In this study we aimed to distinguish the antecedents of brand rumor transmission and investigate the effects of brand rumor characteristics on rumor spread intention. We also identified key components in personal acceptance of brand rumors. From a contextualist perspective, we tried to unify the traditional psychological and sociological views. In this unified research approach we defined brand rumor characteristics based on five major variables that have been found to influence the process of rumor spread intention. These five factors, usefulness, source credibility, message credibility, worry, and vividness, encompass multi-level elements of brand rumors. We also selected product involvement as a control variable. To perform the empirical research, an imaginary Korean 'Kimchi' brand and a related contamination rumor were created and proposed. Questionnaires were collected from 178 Korean respondents. Data were collected from college students who had experienced the focal product. College students were regarded as good subjects because they have a tendency to express their opinions in detail. The PLS (partial least squares) method was adopted to analyze the relations between the variables in the equation model. The most widely adopted causal modeling method is LISREL; however, it is poorly suited to dealing with relatively small data samples and can yield improper solutions in some cases.
PLS has been developed to avoid some of these limitations and provide more reliable results. To test reliability using SPSS 16, Cronbach's alpha was examined, and all values were appropriate, ranging between .802 and .953. Subsequently, confirmatory factor analysis was conducted successfully, and structural equation modeling was used to analyze the research model using SmartPLS (ver. 2.0). Overall, the R² of rumor adoption is .476 and the R² of rumor transmission intention is .218. The overall model showed a satisfactory fit. The empirical results can be summarized as follows. According to the results, the brand rumor characteristic variables of source credibility, message credibility, worry, and vividness affect the argument strength of the rumor, and the argument strength of the rumor in turn affects rumor transmission intention. On the other hand, the relationship between perceived usefulness and the argument strength of the rumor is not significant. The moderating effect of product involvement on the relation between the argument strength of the rumor and rumor word-of-mouth intention is not supported either. Consequently, this study suggests some managerial and academic implications. We consider implications for corporate crisis management planning, PR, and brand management. These results show marketers that rumor is a critical factor in managing strong brand assets. For researchers, brand rumors should become an important research topic for understanding the relationship between consumer and brand. Recently, many brand managers and marketers have taken a short-term view, focusing only on strengthening the positive brand image. Based on this study, we suggest that effective brand management requires managing negative brand rumors with a long-term view of marketing decisions.
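As a small illustration of the reliability check mentioned above, here is a minimal sketch of Cronbach's alpha computed from a respondents-by-items matrix; the data frame and item columns are hypothetical, and values above roughly .7 are conventionally taken as acceptable, consistent with the .802 to .953 range reported.

```python
# Minimal sketch of Cronbach's alpha for one measurement scale.
# 'items' is a hypothetical DataFrame: one column per scale item, one row per respondent.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                   # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```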
