The Correction Factor of Sensitivity in Gamma Camera - Based on Whole Body Bone Scan Image - (감마카메라의 Sensitivity 보정 Factor에 관한 연구 - 전신 뼈 영상을 중심으로 -)

  • Jung, Eun-Mi;Jung, Woo-Young;Ryu, Jae-Kwang;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.3 / pp.208-213 / 2008
  • Purpose: The whole body bone scan is one of the most frequently performed examinations in nuclear medicine. Asan Medical Center performs whole body scans on a variety of gamma camera systems manufactured by PHILIPS (PRECEDENCE, BRIGHTVIEW), SIEMENS (ECAM, ECAM signature, ECAM plus, SYMBIA T2), and GE (INFINIA). Because the cameras' sensitivities differ, it is hard to maintain consistent diagnoses across patients' scans. Our aim is to exclude uncontrollable factors in whole body bone scans and to correct for controllable ones, such as the inherent sensitivity of each gamma camera. In this study we measure each camera's sensitivity and investigate reasonable correction factors that allow a patient's condition to be followed up across different gamma cameras. Materials and Methods: We used a $^{99m}Tc$ flood phantom, prepared following the IAEA recommendation and based on the typical count rate of a whole body scan, and measured count rates on the gamma cameras of the Asan Medical Center nuclear medicine department (PRECEDENCE, BRIGHTVIEW, ECAM, ECAM signature, ECAM plus, SYMBIA T2, INFINIA). For the sensitivity measurement, every camera was equipped with a LEHR (Low Energy High Resolution parallel-hole) collimator; the $^{99m}Tc$ energy window was set to about 15% around the 140 keV photopeak, and images were acquired for 60 s and 120 s on all cameras. To verify whether the calculated correction factors can be applied to whole body bone scans, we also performed whole body bone scans on 27 patients and compared and analyzed the results. Results: In the $^{99m}Tc$ flood phantom experiment, ECAM plus showed the highest sensitivity, followed in order by ECAM signature, SYMBIA T2, ECAM, BRIGHTVIEW, INFINIA, and PRECEDENCE.
The derived sensitivity correction factors express each camera's sensitivity relative to the ECAM (ECAM plus 1.07, ECAM signature 1.05, SYMBIA T2 1.03, ECAM 1.00, BRIGHTVIEW 0.90, INFINIA 0.83, PRECEDENCE 0.72). The correction factors derived from the $^{99m}Tc$ experiment and those derived from the whole body bone scans did not differ significantly (at the p<0.05 level) for whole body bone scan diagnosis. Conclusion: In diagnosing bone metastasis in cancer patients, the whole body bone scan is used for follow-up because of its advantages (high sensitivity, noninvasiveness, ease of performance). As a follow-up study, however, it is difficult to keep performing the scan on the same gamma camera, and even with the same camera, changes in the equipment's performance over time must be considered. We therefore expect that applying a sensitivity correction factor for patients who undergo regular whole body bone scans will add consistency to their diagnosis.
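The correction described in the abstract can be sketched in a few lines: divide a scan's raw counts by the camera's relative sensitivity factor to express them as ECAM-equivalent counts. The factors below are the ones reported above; the scan counts are hypothetical illustrations, not the authors' data or software.

```python
# Sketch: applying the relative sensitivity correction factors reported in
# the abstract to normalize whole-body bone scan counts to the ECAM
# reference camera. Scan counts below are hypothetical.

CORRECTION_FACTORS = {
    "ECAM plus": 1.07,
    "ECAM signature": 1.05,
    "SYMBIA T2": 1.03,
    "ECAM": 1.00,
    "BRIGHTVIEW": 0.90,
    "INFINIA": 0.83,
    "PRECEDENCE": 0.72,
}

def to_ecam_equivalent(counts: float, camera: str) -> float:
    """Divide raw counts by the camera's relative sensitivity to get the
    counts the reference ECAM camera would have recorded."""
    return counts / CORRECTION_FACTORS[camera]

# Hypothetical follow-up: the same patient scanned on two cameras.
raw_precedence = 720_000  # hypothetical raw counts on PRECEDENCE
print(to_ecam_equivalent(raw_precedence, "PRECEDENCE"))  # about 1,000,000
```

With such a normalization, serial scans acquired on different cameras become comparable on a single scale, which is the consistency the conclusion argues for.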


Differential Effects of Recovery Efforts on Products Attitudes (제품태도에 대한 회복노력의 차별적 효과)

  • Kim, Cheon-Gil;Choi, Jung-Mi
    • Journal of Global Scholars of Marketing Science / v.18 no.1 / pp.33-58 / 2008
  • Previous research has presupposed that consumers who receive some recovery after experiencing a product failure should evaluate the product better than consumers who receive none. The major purposes of this article are to examine the impact of product defect failures, rather than service failures, and to explore the effect of recovery efforts on post-recovery product attitudes. First, this article deals with severe and unsevere failures, and the corresponding recovery efforts, for tangible products rather than intangible services. Unlike for intangible services, purchase and usage are separable for tangible products. Because of this difference, a recovery strategy for a tangible product usually cannot be executed immediately when the consumer discovers the failure. During the time gap between failure and recovery, the consumer may think over the background and causes of the unpleasant event, and this deliberation may dilute the positive effect of the recovery effort. The recovery strategies offered to consumers who experience product failures can be classified into three types. A provider can give the consumer a new product replacing the defective one, a complimentary product, a discount at the time of the failure incident, or a coupon usable on the next visit; this is termed "a rewarding effort." Alternatively, a failure may arise as the flip side of one of the product's benefits; the provider can then explain in detail that the defect is hard to avoid because it is closely tied to a specific advantage of the product. This is termed "a strengthening effort." A third possible strategy is to repair the negative attitude toward one's own brand by highlighting the disadvantages of a competing brand rather than the advantages of one's own; this is termed "a weakening effort."
This paper emphasizes that, to confirm its effectiveness, a recovery strategy should be compared with doing nothing in response to the product failure, so the three types of recovery effort are discussed in comparison with a no-recovery condition. The strengthening strategy claims that the failure is closely related to another advantage of the product and expects this two-sidedness to ease consumers' complaints. The weakening strategy emphasizes that the failure would not have been avoided even by choosing a competing brand. Both strategies can be effective in restoring the original attitude, either by providing a plausible motive to accept the failure or by informing consumers that the firm is not solely responsible. They may nevertheless be less effective than the rewarding strategy, which directly addresses consumers' need for restitution. In particular, the relative effect of the strengthening and weakening efforts may depend on the severity of the failure. A consumer who perceives a highly severe failure is likely to attach importance to the property that caused it, which implies that the strengthening effort would be less effective under high severity. Under low severity, by contrast, the failing property is not diagnostic information; consumers pay little attention to non-diagnostic information and are unlikely to change their attitudes because of it, which implies that the strengthening effort would be more effective under low severity. A 2 (product failure severity: high or low) x 4 (recovery strategy: rewarding, strengthening, weakening, or doing nothing) between-subjects design was employed. The levels of failure severity and the types of recovery strategy were determined after a series of expert interviews.
The dependent variable was product attitude after the recovery effort was provided. Subjects were 284 consumers who had experience with cosmetics. Subjects were first given a product failure scenario and asked to rate its comprehensibility, the probability that they would complain about the failure, and the failure's subjective severity. After a recovery scenario was presented, its comprehensibility and overall evaluation were measured. Subjects assigned to the no-recovery condition instead read a short news article on the cosmetics industry. Next, subjects answered filler questions: 42 items on need for cognitive closure and 16 items on need to evaluate. On the following page, product attitude was measured on a five-item, six-point scale and repurchase intention on a three-item, six-point scale. After the demographic variables of age and sex were asked, ten items of objective knowledge were checked. The results showed that subjects formed more favorable evaluations after receiving rewarding efforts than after receiving either strengthening or weakening efforts. This is consistent with Hoffman, Kelley, and Rotalsky (1995) in that a tangible service recovery can be more effective than intangible efforts. Strengthening and weakening efforts were also effective compared with no recovery effort, so in general any recovery increased product attitudes. These results suggest that a recovery strategy such as a strengthening or weakening effort, although it contains no specific reward, may still affect consumers experiencing severe dissatisfaction and strong complaints. Under the condition of low failure severity, however, strengthening and weakening efforts did not increase product attitudes.
We conclude that, for consumers with low involvement, only a physical recovery effort may be recognized as the firm's willingness to make up for its fault. The results of the experiment are explained in terms of attribution theory. This article is limited in that it used fictitious scenarios; future research should test the effect of recovery on actual consumers. Recovery involves a direct, firsthand experience of ex-users and does not apply to non-users; the experience of receiving recovery efforts is more salient and accessible for ex-users, so a recovery effort might improve product attitude more for ex-users than for non-users. The present experiment also did not include consumers who had no experience with the products or who did not perceive the failure. For non-users and unaware consumers, recovery efforts might even decrease product attitude and purchase intention, because the recovery attempt gives them an opportunity to notice the product failure.
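The 2 x 4 between-subjects design above amounts to comparing cell means of the attitude measure across severity and strategy. A minimal sketch of that structure, using hypothetical ratings (the study's own data came from the 284 cosmetics consumers):

```python
# Minimal sketch of the 2 (failure severity) x 4 (recovery strategy)
# between-subjects design: compute cell means of post-recovery product
# attitude. All ratings below are hypothetical illustrations.

from statistics import mean

# (severity, strategy) -> attitude ratings on the six-point scale
ratings = {
    ("high", "rewarding"):     [5, 5, 4, 5],
    ("high", "strengthening"): [3, 2, 3, 3],
    ("high", "weakening"):     [4, 3, 4, 3],
    ("high", "none"):          [2, 2, 1, 2],
    ("low", "rewarding"):      [5, 4, 5, 4],
    ("low", "strengthening"):  [4, 4, 5, 4],
    ("low", "weakening"):      [3, 4, 3, 4],
    ("low", "none"):           [3, 3, 2, 3],
}

cell_means = {cell: mean(r) for cell, r in ratings.items()}

# The hypothesized ordering: rewarding beats the two argument-based
# efforts, which in turn beat doing nothing.
for severity in ("high", "low"):
    assert cell_means[(severity, "rewarding")] > cell_means[(severity, "none")]
```

In the actual study these comparisons would be run as an analysis of variance with an interaction term for severity x strategy; the sketch only shows the factorial layout.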


An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition accelerates and the complexity of change grows, a variety of research has been conducted on improving firms' short-term performance and enhancing their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm a competitive advantage. Discovering promising technologies depends on how a firm evaluates the value of technologies, and many evaluation methods have been proposed. Approaches based on experts' opinions have been widely used to predict the value of technologies. While this approach provides in-depth analysis and ensures the validity of the results, it is usually cost- and time-intensive and is limited to qualitative evaluation. Many studies attempt to forecast the value of technology using patent information in order to overcome these limitations. Patent-based technology evaluation has served as a valuable approach to technological forecasting because a patent contains a full, practical description of a technology in a uniform structure, and provides information not divulged in any other source. Although the patent-based approach has contributed to our understanding of how to predict promising technologies, it has limitations: predictions are based only on past patent information, and interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates patent information analysis with an artificial intelligence method. The methodology consists of three modules: evaluation of the promise of technologies, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different, complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on the development and improvement of future technologies and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies and represents the breadth of search underlying it; since fusion can be calculated per technology or per patent, this study measures two fusion indexes, one per technology and one per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented with an artificial intelligence method. The study uses the values of the five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (t-n, t-n-1, t-n-2, …) as input variables; the output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP), which provides the relative importance of each index and yields a final promising index for each technology. The applicability of the proposed methodology is tested on U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of the predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the remaining indexes.
These unexpected results may be explained, in part, by the small number of patents: since the study uses only patent data in class G06F, the sample is relatively small, leading to incomplete learning for a complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. It helps managers who plan technology development, and policy makers who implement technology policy, by providing a quantitative prediction methodology. It may also help other researchers by providing a deeper understanding of the complex field of technological forecasting.
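The AHP step in the third module can be sketched as follows: a positive reciprocal pairwise-comparison matrix over the five indexes is reduced to a weight vector via its principal eigenvector, approximated here by power iteration. The comparison values below are hypothetical; only the use of AHP over the five indexes comes from the abstract.

```python
# Sketch of the AHP weighting step: approximate the principal eigenvector
# of a positive reciprocal pairwise-comparison matrix by power iteration.
# The pairwise judgments are hypothetical illustrations.

def ahp_weights(matrix, iters=100):
    """Return the principal eigenvector of a positive reciprocal matrix,
    normalized to sum to 1 (the AHP priority weights)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical comparisons over (impact, fusion/technology, fusion/patent,
# diffusion/technology, diffusion/patent); matrix[i][j] says how much more
# important index i is than index j.
pairwise = [
    [1.0, 1.0, 3.0, 5.0, 5.0],
    [1.0, 1.0, 3.0, 5.0, 5.0],
    [1/3, 1/3, 1.0, 3.0, 3.0],
    [1/5, 1/5, 1/3, 1.0, 1.0],
    [1/5, 1/5, 1/3, 1.0, 1.0],
]

weights = ahp_weights(pairwise)
# A technology's final "promising index" would then be the weighted sum of
# its five index values under these weights.
```

The paper's finding that fusion index per technology and impact index are the important criteria corresponds, in this sketch, to those two rows receiving the largest weights.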

Scale and Scope Economies and Prospect for the Korea's Banking Industry (우리나라 은행산업(銀行産業)의 효율성분석(效率性分析)과 제도개선방안(制度改善方案))

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy / v.14 no.2 / pp.109-153 / 1992
  • This paper estimates a translog cost function for Korea's banking industry and, based on the estimated efficiency indicators, derives implications for the future structure of Korean banking. The Korean banking industry is permitted to operate trust business to the full extent and securities business to a limited extent, while formally being subject to a strict, specialized banking system. Securities underwriting and investment are allowed only to a very limited extent: only for stocks and bonds of maturity longer than three years, and only up to 100 percent of the bank's paid-in capital (until the end of 1991, the ceiling was 25 percent of the total balance of demand deposits). Banks are prohibited from the securities brokerage business. While in-house integration of securities business with the traditional deposit and commercial lending business is thus restrictively regulated, Korean banks can enter the securities industry by establishing subsidiaries. This paper therefore estimates cost functions and efficiency indicators, treating the in-house integrated trust business and securities investment business as important banking activities, for various cases in which the production and intermediation approaches to modelling financial intermediaries are applied separately, and in which the banking businesses of deposit, lending, and securities investment as one group and the trust businesses as another are analyzed both separately and jointly. The estimated efficiency indicators for the various cases are summarized in Table 1 and Table 2. First, the securities business exhibits economies of scale as well as economies of scope with traditional banking activities, which implies that in-house integration of banking and securities business need not be a suboptimal banking structure.
This result further implies that transforming Korea's banking system from the current specialized system to a universal banking system would not impede improvement of the industry's efficiency. Second, the lending business turns out to be subject to diseconomies of scale while exhibiting unclear evidence of economies of scope; in sum, this implies a potential efficiency gain from continued in-house integration of the lending activity. Third, continued integration of the trust business seems to contribute to the efficiency of the banking business, since the trust business exhibits economies of scope. Fourth, deposit services and fee-based activities, such as the foreign exchange and credit card businesses, exhibit economies of scale but constant returns to scope, which implies the possibility of separating those businesses from other banking and trust activities. The recent trend of operating the credit card business separately, under an independent identity, in Korea as well as in the global banking market seems consistent with this finding. How, then, should the possibility of separating deposit services from the remaining activities be interpreted? Under a strict definition of commercial banking confined to deposit and commercial lending, separating the deposit service would suggest the dissolution, or disappearance, of banking itself. Recently, however, it has been suggested that separating banks' deposit and lending activities, by letting a depository institution that specializes in deposit taking and invests deposit funds only in the safest securities, such as government securities, administer the deposit activity, would alleviate the risk of a bank run and thereby help improve the safety of the payment system (Robert E. Litan, What Should Banks Do?, Washington, D.C.: The Brookings Institution, 1987).
In this context, the possibility of separating the deposit activity implies that a new type of depository institution may arise naturally, without contradicting the efficiency of the banking business, as the banking market grows. Moreover, additional evidence confirms this: deposit taking and securities business are cost complements, whereas deposit taking and lending are cost substitutes (see Table 2 for the cost complementarity relationships in Korea's banking industry). Finally, Korea's banking industry is found to lack the characteristics of a natural monopoly; it therefore may not be optimal to encourage mergers and acquisitions in the banking industry solely for the purpose of improving efficiency.
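For context, the measures discussed above can be written down explicitly. The notation below is a standard two-output translog specification, supplied here for the reader; it is not reproduced from the paper.

$$\ln C = \alpha_0 + \sum_i \alpha_i \ln y_i + \tfrac{1}{2}\sum_i\sum_j \gamma_{ij}\,\ln y_i \ln y_j + \sum_k \beta_k \ln w_k + \dots$$

where $C$ is total cost, the $y_i$ are outputs (e.g., deposits, lending, securities investment, trust), and the $w_k$ are input prices. Overall scale economies are then

$$SCE = \Bigl(\sum_i \frac{\partial \ln C}{\partial \ln y_i}\Bigr)^{-1},$$

with $SCE > 1$ indicating economies of scale. Economies of scope between outputs $y_1$ and $y_2$ are measured by

$$SC = \frac{C(y_1, 0) + C(0, y_2) - C(y_1, y_2)}{C(y_1, y_2)} > 0,$$

and cost complementarity by $\partial^2 C / \partial y_1 \partial y_2 < 0$, the condition behind the deposit-securities finding cited above.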


A study on Development Process of Fish Aquaculture in Japan - Case by Seabream Aquaculture - (일본 어류 양식업의 발전과정과 산지교체에 관한 연구 : 참돔양식업을 사례로)

  • 송정헌
    • The Journal of Fisheries Business Administration / v.34 no.2 / pp.75-90 / 2003
  • When we consider the fundamental problems of the aquaculture industry, several strict conditions apply, and the industry is consequently forced to change. Fish aquaculture faces a structural supply surplus in production, deterioration of fishing grounds, stagnant low prices due to the recent recession, and drastic changes in distribution. We need to ask how fish aquaculture establishes its status within the coastal fishery, whether it will grow in the future, and, if so, how it will be restructured. These issues have been studied for the mariculture of yellowtail, sea scallop, and eel, but not for sea bream, even though sea bream recently accounts for over 30% of total fish aquaculture production and occupies an important position in the industry. The objective of this study is to forecast the future movement of sea bream aquaculture. The first goal is to contribute to managerial and economic studies of the aquaculture industry; the second is to identify the factors influencing competition between producing areas and the mechanisms involved. This study examines the competitive power of individual producing areas, their behavior, and the factors constraining them, based on case studies. Producing areas are categorized by the following parameters: distance to market and availability of transportation, natural environment, time of formation of the producing area (leader or follower), major production items, scale of business and of the producing area, and degree of organization in production and sales. Among the factors shaping sea bream producing areas, natural conditions, especially water temperature, are very important: sea bream feeds more actively and grows faster in areas where the water temperature does not fall below 13-14°C during the winter.
Fish aquaculture is also constrained by transport distance. Aquacultured yellowtail is a mass-produced, mass-distributed item, sold by the cage and transported by ship. Sea bream, by contrast, is sold in small quantities and transported by truck, so its transportation cost is higher than yellowtail's. Because of transport distance, aquacultured sea bream takes different product forms, and the live fish and fresh fish markets must be studied separately. Live fish was the original product form of aquacultured sea bream. Transporting live fish is more constrained than transporting fresh fish: the death rate is highly correlated with distance, and the loading capacity for live fish is smaller. A 10-ton truck can carry only about 1.5 tons of live fish, whereas fresh fish packed in boxes can be loaded up to 5 or 6 tons. Because of these characteristics, live fish production requires locations closer to the consumption areas than fresh fish production. In the consumption markets the size of fresh fish is mainly 0.8 to 2 kg. Live fish usually goes through auction and is graded for quality; the main purchasers are many small restaurants, so relatively small farmers and distributors can sell it. Aquacultured sea bream has been traded as fresh fish in general merchandise stores (GMS) since 1993, when its price plummeted, and economies of scale operate in the fresh fish trade. The characteristics of the fresh fish market are as follows: as large-scale buyers, general merchandise stores are the main purchasers of sea bream, the fish size is around 1.3 kg, and transactions mainly go through negotiation. Aquacultured sea bream has become established as a representative food in general merchandise stores, which require stable mass supply, consistent size, and low prices. Distribution of fresh fish is undertaken by large-scale distributors, which can satisfy these requirements. The market shares in the Tokyo Central Wholesale Market show that Mie Pref.
is dominant in live fish and Ehime Pref. in fresh fish. Ehime Pref. grew remarkably in the 1990s. At present, dealings in live fish are decreasing while dealings in fresh fish are increasing in the Tokyo Central Wholesale Market, and the price of live fish is falling faster than that of fresh fish. Although Ehime Pref. has an ideal natural environment for sea bream aquaculture, it entered the business late because it is farther from consumers than the competing producing areas. Nevertheless, Ehime Pref. became the leading producing area through sales of fresh fish in the 1990s; its production volume is almost three times that of Mie Pref., the second-largest producing area. Conversion from yellowtail to sea bream aquaculture continues in Ehime Pref., because Kagoshima Pref. has a better natural environment for yellowtail. Ehime's transportation is worse than Mie's, but as a distant producing area it compensates by increasing its business scale. Ehime Pref. has increased its market share in fresh fish by creating demand from general merchandise stores, developing market strategies such as quick returns at small profits, stable mass supply, and standardized sizes, and it has increased its market power through the capital of large-scale commission agents. Mie Pref., by contrast, is close to the markets and composed of small-scale farmers. It switched to sea bream aquaculture early, because of the price decline of aquacultured yellowtail and problems with the natural environment, and it changed little until 1993, when the price of sea bream plummeted, because it enjoyed a better natural environment and transportation. Mie Pref. has the water temperature range required for sea bream aquaculture. However, the price of live sea bream continued to decline due to excessive production and the economic recession.
As a consequence, small-scale farmers faced a market price below the average production cost in 1993, and the small, inefficient operators in Mie Pref. were obliged to withdraw from sea bream aquaculture. Kumamoto Pref. is located farther from the markets and has an unsuitable natural environment for sea bream aquaculture. Although Kumamoto Pref. is trying to convert to puffer fish aquaculture, which requires different rearing techniques, the aquaculture technique for puffer fish is not yet established.


A Study on Industries's Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability- (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference / 2004.11a / pp.355-380 / 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as test assets, I establish several key results. First, a number of industries, such as semiconductors, electronics, metals, and petroleum, lead the stock market by up to one month; in contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast indicators of economic activity such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals, because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity, but this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets they trade. Indeed, a large literature in psychology documents the extent to which attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors have information about only a limited number of stocks and trade only those.
Related models of limited market participation include Brennan (1975) and Allen and Gale (1994). In these models, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information diffuses gradually across the investing public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information originating in one asset reaches investors in other markets only with a lag, i.e., news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract information from, the asset prices of markets in which they do not participate. Taken together, these two assumptions lead to cross-asset return predictability. The hypothesis appears plausible for several reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets; limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers there is specialization along industries, such as sector or market-timing funds. Some reasons for this limited participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets in which they do participate.
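A minimal version of the lead-lag test behind this hypothesis: regress this month's market return on last month's return of an industry portfolio; a reliably nonzero slope is the cross-asset predictability at issue. The return series below are hypothetical, and the closed-form one-regressor OLS is only a sketch of the kind of regression the paper runs.

```python
# Sketch of a lead-lag predictive regression,
#   r_mkt[t] = a + b * r_ind[t-1] + e,
# using closed-form one-regressor OLS. All returns are hypothetical.

def ols_slope_intercept(x, y):
    """One-regressor OLS: slope = cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var
    a = my - b * mx
    return a, b

# Hypothetical monthly returns for one industry portfolio and the market.
r_ind = [0.02, -0.01, 0.03, 0.00, 0.01, -0.02, 0.04]
r_mkt = [0.01, 0.015, -0.005, 0.02, 0.002, 0.008, -0.012]

# Align so the predictor is the industry return lagged one month.
a, b = ols_slope_intercept(r_ind[:-1], r_mkt[1:])
```

In the paper's setting this would be run for each of the thirty-six industry portfolios, with inference on whether the lagged-industry slope b is significantly different from zero.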


Changes in blood pressure and determinants of blood pressure level and change in Korean adolescents (성장기 청소년의 혈압변화와 결정요인)

  • Suh, Il;Nam, Chung-Mo;Jee, Sun-Ha;Kim, Suk-Il;Kim, Young-Ok;Kim, Sung-Soon;Shim, Won-Heum;Kim, Chun-Bae;Lee, Kang-Hee;Ha, Jong-Won;Kang, Hyung-Gon;Oh, Kyung-Won
    • Journal of Preventive Medicine and Public Health / v.30 no.2 s.57 / pp.308-326 / 1997
  • Many studies have led to the notion that essential hypertension in adults is the result of a process that starts early in life: investigation of blood pressure (BP) in children and adolescents can therefore contribute to knowledge of the etiology of the condition. A unique longitudinal study on BP in Korea, known as the Kangwha Children's Blood Pressure (KCBP) Study, was initiated in 1986 to investigate changes in BP in children. This study is a part of the KCBP Study. The purposes of this study are to show changes in BP and to determine factors affecting BP level and change in Korean adolescents during the age period of 12 to 16 years. A total of 710 students (335 males, 375 females) who were in the first grade at junior high school (12 years old) in 1992 in Kangwha County, Korea, were followed with annual measurements of BP and related factors (anthropometric, serologic, and dietary factors) up to 1996. A total of 562 students (242 males, 320 females) completed all five annual examinations. The main results are as follows: 1. For males, mean systolic and diastolic BP at ages 12 and 16 years were 108.7 mmHg and 118.1 mmHg (systolic), and 69.5 mmHg and 73.4 mmHg (diastolic), respectively. BP level was highest when students were 15 years old. For females, mean systolic and diastolic BP at ages 12 and 16 years were 114.4 mmHg and 113.5 mmHg (systolic), and 75.2 mmHg and 72.1 mmHg (diastolic), respectively. BP level reached its highest point when they were 13-14 years old. 2. Anthropometric variables (height, weight, body mass index, etc.) increased steadily during the study period for males; however, the rate of increase slowed for females after age 15 years. Serum total cholesterol decreased and triglyceride increased with age for males, but neither showed any significant trend for females. Total fat intake was higher at age 16 years than at age 14 years.
The compositions of carbohydrate, protein, and fat in total energy intake were 66.2:12.0:19.4 and 64.1:12.1:21.8 at ages 14 and 16 years, respectively. 3. Most anthropometric measures, especially height, body mass index (BMI), and triceps skinfold thickness, showed a significant correlation with BP level in both sexes. When BMI was adjusted for, serum total cholesterol showed a significant negative correlation with systolic BP at age 12 years in males, but at age 14 years the direction of the correlation changed to positive. In females, serum total cholesterol was negatively correlated with diastolic BP at ages 15 and 16 years. Triglyceride and creatinine showed positive correlations with systolic and diastolic BP in males, but no correlation in females. There were no consistent findings between nutrient intake and BP level; however, protein intake correlated positively with diastolic BP level in males. 4. BP change was positively associated with changes in BMI and serum total cholesterol in both sexes. Change in creatinine was associated with BP change positively in males and negatively in females. Male students with high sodium intake showed higher systolic and diastolic BP, and female students with high total fat intake maintained lower BP levels. The major determinant of BP change was BMI in both sexes.
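The BMI-adjusted correlations reported above can be computed as first-order partial correlations, which remove the component of the cholesterol-BP association that both variables share with BMI. A minimal sketch with hypothetical values (the data and variable names are illustrative, not from the study):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# hypothetical measurements for 8 students
sbp  = [104, 108, 110, 115, 118, 121, 125, 130]     # systolic BP (mmHg)
chol = [150, 155, 148, 160, 158, 165, 162, 170]     # serum total cholesterol (mg/dL)
bmi  = [17.0, 18.1, 18.5, 19.2, 20.0, 20.8, 21.5, 22.3]
r_adj = partial_corr(sbp, chol, bmi)  # BMI-adjusted cholesterol-BP correlation
```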


Structure of Export Competition between Asian NIEs and Japan in the U.S. Import Market and Exchange Rate Effects (한국(韓國)의 아시아신흥공업국(新興工業國) 및 일본(日本)과의 대미수출경쟁(對美輸出競爭) : 환율효과(換率效果)를 중심(中心)으로)

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.3-49
    • /
    • 1990
  • This paper analyzes U.S. demand for imports from Asian NIEs and Japan, utilizing the Almost Ideal Demand System (AIDS) developed by Deaton and Muellbauer, with an emphasis on the effect of changes in the exchange rate. The empirical model assumes a two-stage budgeting process in which the first stage represents the allocation of total U.S. demand among three groups: the Asian NIEs and Japan, six Western developed countries, and the U.S. domestic non-tradables and import-competing sector. The second stage represents the allocation of total U.S. imports from the Asian NIEs and Japan among them, by country. According to the AIDS model, the share equation for the Asian NIEs and Japan in U.S. nominal GNP is estimated as a single equation for the first stage. The share equations for those five countries in total U.S. imports are estimated as a system with the general demand restrictions of homogeneity, symmetry, and adding-up, together with polynomially distributed lag restrictions. The negativity condition is also satisfied for all cases. The overall results of these complicated estimations, using quarterly data from the first quarter of 1972 to the fourth quarter of 1989, are quite promising in terms of the significance of individual estimators and other statistics. The conclusions drawn from the estimation results and the derived demand elasticities can be summarized as follows: First, the exports of each Asian NIE to the U.S. are competitive with (substitutes for) Japan's exports, while complementary to the exports of fellow NIEs, with the exception of the competitive relation between Hong Kong and Singapore. Second, the exports of each Asian NIE and of Japan to the U.S. are competitive with those of Western developed countries to the U.S., while they are complementary to the U.S. non-tradables and import-competing sector.
Third, as far as both the first and second stages of budgeting are considered, the imports from each Asian NIE and Japan are luxuries in total U.S. consumption. However, when only the second budgeting stage is considered, the imports from Japan and Singapore are luxuries in U.S. imports from the NIEs and Japan, while those of Korea, Taiwan, and Hong Kong are necessities. Fourth, the above results may be evidenced more concretely in their implied exchange rate effects. It appears that, in general, a change in the yen-dollar exchange rate will have at least as great an impact on an NIE's share and volume of exports to the U.S., though in the opposite direction, as a change in the exchange rate of the NIE's own currency vis-à-vis the dollar. Asian NIEs, therefore, should counteract yen-dollar movements in order to stabilize their exports to the U.S. More specifically, Korea should depreciate the value of the won relative to the dollar by approximately the same proportion as the depreciation rate of the yen vis-à-vis the dollar, in order to maintain the volume of Korean exports to the U.S. In the worst-case scenario, Korea should devalue the won by three times the magnitude of the yen's depreciation rate, in order to keep market share in the aforementioned five countries' total exports to the U.S. Finally, this study provides additional information which may support empirical findings on the competitive relations among the Asian NIEs and Japan. The correlation matrices among the structures of those five countries' exports to the U.S. during the 1970s and 1980s were estimated, with the export structure constructed as the shares of each of the 29 industrial sectors' exports, as defined by the 3-digit KSIC, in total exports to the U.S. from each individual country.
In general, the correlation between each of the four Asian NIEs and Japan, and that between Hong Kong and Singapore, are all far below 0.5, while the ones among the Asian NIEs themselves (except for the one between Hong Kong and Singapore) all greatly exceed 0.5. If there exists a tendency on the part of the U.S. to import goods in each specific sector from different countries in a relatively constant proportion, the export structures of those countries will probably exhibit a high correlation. To take this hypothesis to the extreme, if the U.S. maintained an absolutely fixed ratio between its imports from any two countries for each of the 29 sectors, the correlation between the export structures of these two countries would be perfect. Therefore, since any two goods purchased in a fixed proportion could be classified as close complements, a high correlation between export structures will imply a complementary relationship between them. Conversely, a low correlation would imply a competitive relationship. According to this interpretation, the pattern formed by the correlation coefficients among the five countries' export structures to the U.S. is consistent with the empirical findings of the regression analysis.
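For reference, the budget-share equations estimated in each stage take the standard Deaton-Muellbauer AIDS form (the notation here is the textbook one, assumed rather than taken from the paper):

```latex
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{X}{P}\right),
\qquad
\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k + \tfrac{1}{2}\sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j ,
```

where $w_i$ is country $i$'s share, $p_j$ are prices, and $X/P$ is real expenditure. The restrictions named in the abstract are adding-up ($\sum_i \alpha_i = 1$, $\sum_i \beta_i = 0$, $\sum_i \gamma_{ij} = 0$), homogeneity ($\sum_j \gamma_{ij} = 0$), and symmetry ($\gamma_{ij} = \gamma_{ji}$).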


Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu;Kim, Jungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.49-65
    • /
    • 2014
  • Word of Mouth (WOM) is a behavior used by consumers to transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM for their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as influential factors in box office revenue. Based on previous research results, we develop five hypotheses on the relationships between potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average of degree centralities of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average of betweenness centralities of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average of closeness centralities of reviewers in online product communities is positively related to the total revenue of movies.
To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), which is a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis include the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, and community data in IMDb including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, the revenue data from Box-Office-Mojo is collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities, including degree, betweenness, and closeness centrality, for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research results, our regression analysis results show that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the averages of betweenness centralities from initial community networks impact the final movie revenues. However, neither the averages of degree centralities nor those of closeness centralities influence final movie performance. Based on the regression results, three hypotheses, 1, 2, and 4, are accepted, and two hypotheses, 3 and 5, are rejected. This study tries to link the network structure of e-WOM on online product communities with the product's performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance.
The results show that the betweenness centralities of the reviewer community are critical for the prediction of movie performance. However, degree centralities and closeness centralities do not influence movie performance. As future research topics, similar analyses are required for other product categories such as electronic goods and online content to generalize the study results.
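The paper computes its centrality averages with NodeXL; the same quantities can be illustrated in a few lines of pure Python. This sketch (hypothetical reply edges, replies treated as undirected ties) computes the per-movie averages of degree and closeness centrality; betweenness centrality is computed analogously (e.g., with Brandes' algorithm):

```python
from collections import deque

# hypothetical reply relationships: (reviewer, responder) pairs for one movie
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
nodes = sorted({u for e in edges for u in e})
adj = {v: set() for v in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)  # treat replies as undirected ties
n = len(nodes)

def degree_centrality(v):
    """Fraction of other reviewers v is directly connected to."""
    return len(adj[v]) / (n - 1)

def bfs_dists(s):
    """Shortest-path distances from s (unweighted BFS)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def closeness_centrality(v):
    """(n-1) / sum of distances to all other nodes; connected graph assumed."""
    return (n - 1) / sum(bfs_dists(v).values())

avg_degree = sum(degree_centrality(v) for v in nodes) / n
avg_closeness = sum(closeness_centrality(v) for v in nodes) / n
```

Each movie's network yields one pair of averages, which then enter the regression on final revenue as in the study.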

Study of East Asia Climate Change for the Last Glacial Maximum Using Numerical Model (수치모델을 이용한 Last Glacial Maximum의 동아시아 기후변화 연구)

  • Kim, Seong-Joong;Park, Yoo-Min;Lee, Bang-Yong;Choi, Tae-Jin;Yoon, Young-Jun;Suk, Bong-Chool
    • The Korean Journal of Quaternary Research
    • /
    • v.20 no.1 s.26
    • /
    • pp.51-66
    • /
    • 2006
  • The climate of the last glacial maximum (LGM) in northeast Asia is simulated with the NCAR CCM3 atmospheric general circulation model at a spectral truncation of T170, corresponding to a grid cell size of roughly 75 km. The modern climate is simulated with prescribed sea surface temperatures and sea ice provided by NCAR, together with contemporary atmospheric $CO_2$, topography, and orbital parameters, while the LGM simulation is forced with reconstructed CLIMAP sea surface temperatures, sea ice distribution, ice sheet topography, reduced $CO_2$, and LGM orbital parameters. Under LGM conditions, winter surface temperature is markedly reduced, by more than $18^{\circ}C$, in the Korean West Sea and on the continental margin of the Korean East Sea, where the ocean was exposed as land in the LGM, whereas in these areas summer surface temperature is warmer than present by up to $2^{\circ}C$. This contrast is due to the difference in heat capacity between ocean and land. Overall, in the LGM the surface is cooled by $4{\sim}6^{\circ}C$ over northeast Asian land and by $7.1^{\circ}C$ over the entire area. An analysis of surface heat fluxes shows that the surface cooling is due to the increase in outgoing longwave radiation associated with the reduced $CO_2$ concentration. The reduction in surface temperature leads to a weakening of the hydrological cycle. In winter, precipitation decreases most in the southeastern part of Asia, by about $1{\sim}4\;mm/day$, while in summer a larger reduction is found over China. Overall, annual-mean precipitation decreases by about 50% in the LGM. In northeast Asia, evaporation is also reduced overall in the LGM, but the reduction in precipitation is larger, eventually leading to a drier climate. The drier LGM climate simulated in this study is consistent with proxy evidence compiled in other areas. Overall, the high-resolution model captures the climate features reasonably well over the global domain.
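Area averages like the $7.1^{\circ}C$ cooling quoted above must weight each latitude-longitude grid cell by the cosine of its latitude, since cells shrink toward the poles. A minimal sketch with a hypothetical 3x3 grid of LGM-minus-modern temperature anomalies (the values are illustrative, not the model's output):

```python
import math

# hypothetical temperature anomalies (°C), one row per latitude band
lats = [30.0, 40.0, 50.0]
anom = [[-4.0, -5.0, -6.0],
        [-5.0, -6.0, -7.0],
        [-6.0, -7.0, -8.0]]

# cosine-latitude weights: grid cells cover less area at higher latitudes
weights = [math.cos(math.radians(lat)) for lat in lats]

num = sum(w * t for w, row in zip(weights, anom) for t in row)
den = sum(w * len(row) for w, row in zip(weights, anom))
area_mean = num / den  # area-weighted mean anomaly over the domain
```

An unweighted average would overstate the contribution of the high-latitude rows, where the cooling is strongest in this example.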
