• Title/Summary/Keyword: administration errors

Search Results: 249

Effects of Environmental Conditions on Vegetation Indices from Multispectral Images: A Review

  • Md Asrakul Haque;Md Nasim Reza;Mohammod Ali;Md Rejaul Karim;Shahriar Ahmed;Kyung-Do Lee;Young Ho Khang;Sun-Ok Chung
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.319-341
    • /
    • 2024
  • The utilization of multispectral imaging systems (MIS) in remote sensing has become crucial for large-scale agricultural operations, particularly for diagnosing plant health, monitoring crop growth, and estimating plant phenotypic traits through vegetation indices (VIs). However, environmental factors can significantly affect the accuracy of multispectral reflectance data, leading to potential errors in VIs and crop status assessments. This paper reviewed the complex interactions between environmental conditions and multispectral sensors, emphasizing the importance of accounting for these factors to enhance the reliability of reflectance data in agricultural applications. The fundamentals of multispectral sensors and the operational principles behind VI computation were also reviewed. The review highlights the impact of environmental conditions, particularly the solar zenith angle (SZA), on reflectance data quality. Higher SZA values increase cloud optical thickness and droplet concentration by 40-70%, affecting reflectance in the red (-0.01 to 0.02) and near-infrared (NIR) bands (-0.03 to 0.06), which is crucial for VI accuracy. An SZA of 45° is optimal for data collection, while atmospheric conditions such as water vapor and aerosols greatly influence reflectance data, affecting forest biomass estimates and agricultural assessments. During the COVID-19 lockdown, reduced atmospheric interference improved the consistency of satellite image reflectance. The NIR/red-edge ratio and the water index emerged as the most stable indices, providing consistent measurements across different lighting conditions. Additionally, a simulated environment demonstrated that MIS surface reflectance can vary 10-20% with changes in aerosol optical thickness, 15-30% with water vapor levels, and up to 25% in NIR reflectance due to high wind speeds. Seasonal factors such as temperature and humidity can cause up to a 15% change, highlighting the complexity of environmental impacts on remote sensing data. This review indicated the importance of precisely managing environmental factors to maintain the integrity of VI calculations. Clarifying the relationship between environmental variables and multispectral sensors offers valuable insights for optimizing the accuracy and reliability of remote sensing data in various agricultural applications.
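
As a reading aid for the index computations discussed in this abstract, the following is a minimal sketch (Python with NumPy) of NDVI and the NIR/red-edge ratio computed from per-pixel band reflectances; the band values shown are hypothetical placeholders, not data from the review.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)  # small epsilon avoids division by zero

def nir_red_edge_ratio(nir, red_edge):
    """Simple ratio index NIR / red-edge, one of the indices reported as most stable."""
    return np.asarray(nir, dtype=float) / (np.asarray(red_edge, dtype=float) + 1e-10)

# Hypothetical per-pixel surface reflectances (0-1 scale), for illustration only
red = np.array([0.05, 0.08, 0.12])
red_edge = np.array([0.20, 0.22, 0.25])
nir = np.array([0.45, 0.40, 0.30])

print(ndvi(nir, red))                 # higher values indicate denser vegetation
print(nir_red_edge_ratio(nir, red_edge))
```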

A Study on the Construal Level and Intention of Autonomous Driving Taxi According to Message Framing (해석수준과 메시지 프레이밍에 따른 자율주행택시의 사용의도에 관한 연구)

  • Yoon, Seong Jeong;Kim, Min Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.135-155
    • /
    • 2018
  • The purpose of this study is to analyze differences in construal level and in intention to use under different message framings when autonomous vehicles, which are emerging as products of the 4th industrial revolution, are used as taxis. Construal level refers to how a product or service is interpreted depending on whether it is assumed to occur in the near future or in the distant future. Message framing refers to formulating positive or negative expressions or messages in terms of gains and losses. In other words, previous studies show that the value of a product or service is interpreted differently according to these two concepts. This study investigates whether intention to use differs when these two concepts are applied to the launch of autonomous vehicles as taxis. The results are summarized as follows. First, for the message framing manipulation, one message described the gains and the reasons why one should use an autonomous taxi, and the other described the losses and what happens when one does not use it; the two messages were then compared. The two message framings differed significantly (t = 3.063), and the message describing the gains and reasons produced a higher intention to use. The results for construal level are summarized as follows: with respect to gains and losses, intention to use differed depending on whether the scenario was assumed to occur in the near future or in the distant future. In summary, to increase the intention to use autonomous taxis, messages should be framed positively in terms of gains and should describe what can happen in the distant future. The methodology of this study can also be applied to research on the intention to use other new technologies. However, this study has the following limitations. First, it assumes message framing and temporal distance without any user experience of autonomous taxis, which may differ from the actual experience of using an autonomous taxi in the future. Second, although technical progress in self-driving cars is continuing, laws and institutions must be established and infrastructure must be built before autonomous cars can be commercialized; given this, the results of this study cannot fully reflect real-world conditions. There is, however, a practical limit to finding users with sufficient experience of new technologies such as autonomous vehicles. In fact, even if the road infrastructure and the technical and legal conditions for operating autonomous taxis were ready, the public might still be unwilling to choose them because they do not have enough knowledge to use an autonomous cab. Therefore, assuming that autonomous cars will be commercialized as taxis, the main purpose of this study is to find how messages should be framed and delivered so that they most effectively encourage the use of autonomous taxis. In addition, the research methodology should be improved, and future research should proceed as follows. First, most respondents in this study were students, so it is difficult to generalize the hypotheses tested here. In future studies, it would therefore be reasonable to survey a population with a more varied distribution of age, region, occupation, and education level, focusing on people who would use autonomous taxis rather than on those who can drive.
Second, it is desirable to construct various message framings in the questionnaire, but respondents then need to become familiar with the framings in advance, and carryover errors from one framing to the next must be prevented. Therefore, when the questionnaire is designed, the message framings should be measured with a certain amount of time between them.
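
To illustrate the kind of comparison behind the reported t-statistic, here is a minimal sketch of an independent-samples t-test between two framing groups (Python with SciPy); the intention-to-use scores are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical 7-point intention-to-use scores for the two framings; values are
# placeholders for illustration only (the study reported t = 3.063).
gain_frame = np.array([5.8, 6.1, 5.5, 6.3, 5.9, 6.0, 5.7, 6.2])
loss_frame = np.array([5.1, 5.4, 4.9, 5.3, 5.0, 5.2, 4.8, 5.5])

# Welch's t-test (unequal variances) comparing mean intention to use
t_stat, p_value = stats.ttest_ind(gain_frame, loss_frame, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```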

Survey of Sedation Practices by Pediatric Dentists (소아치과의사의 진정법 사용에 대한 실태조사)

  • Yang, Yeonmi;Shin, Teojeon;Yoo, Seunghoon;Choi, Seongchul;Kim, Jiyeon;Jeong, Taesung
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.41 no.3
    • /
    • pp.257-265
    • /
    • 2014
  • The aim of this study was to establish appropriate guidelines on sedation techniques and to organize future continuing education programs on sedation, under the direction of the Committee on Sedation, Education and Research of the Korean Academy of Pediatric Dentistry (KAPD). A survey on sedation techniques was administered to 111 organizations that practice sedation and that responded online or by e-mail by February 2014, and the collected responses were analyzed. The purpose of sedation was mainly to manage children's behavior, and it was used primarily on children 3~4 years old. The most frequent treatment duration was 1~2 hours, treating both the maxilla and the mandible. The preferred dosages of sedative drugs were chloral hydrate (CH) 50~70 mg/kg, hydroxyzine (Hx) 1~2 mg/kg, and intramuscular midazolam (Mida IM) 0.1~0.2 mg/kg. The preferred combinations of sedative drugs were CH + Hx + N₂O/O₂ (67.6%), CH + Hx + submucosal midazolam (Mida SM) + N₂O/O₂ (29.7%), and Mida IM + N₂O/O₂ (23.4%). Additional sedatives were administered in 48% of cases, mainly midazolam. Of the respondents, 87.5% had experienced adverse effects of sedation such as vomiting/retching, agitation during recovery, subclinical respiratory depression, and staggering, yet only 20% of them periodically retrained on emergency management protocols. Regarding discharge criteria for patients after sedation, respondents either lacked clear criteria or did not follow the recommended discharge criteria. Of the respondents, 86% expressed interest in taking a course on sedation, and they mostly wanted to learn about sedation-related emergency management and the safe dosages of sedative drugs. The use of sedation in pediatric dentistry must treat patient safety as the top priority, and each dentist must demonstrate sound practices to prevent possible medical errors. Therefore, the KAPD must establish proper sedation guidelines and provide a systematic training program in sedation-related emergency management for pediatric dentists.

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.207-221
    • /
    • 2023
  • This study suggests deep neural network models for estimating air temperature from the Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The air temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as those for cold and heat waves. Many studies have estimated air temperature from the land surface temperature (LST) retrieved from satellites because air temperature has a strong relationship with LST. However, the LST algorithm, a Level 2 product of GK-2A, works only for clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. The in-situ air temperature data from 95 sites comprised 2,496,634 observations, and the proportions of observations paired with LST and with L1B were 42.1% and 98.4%, respectively. The years 2020 and 2021 were used for training and 2022 for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers to estimate air temperature. Using the 16 L1B bands, the DNN (RMSE 2.22℃) performed better than the baseline model (RMSE 3.55℃) under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, the model showed different characteristics in the seasonal and hourly analyses; because the summer and winter seasons showed low coefficients of determination with high standard deviations, solar information needs to be appended as an input to build a more general DNN model.
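
For readers who want to see the described model shape concretely, the following is a minimal sketch (Python with Keras) of a DNN with a 16-channel input and four hidden fully connected layers, trained with an MSE loss and monitored by RMSE; the layer widths, optimizer settings, and placeholder data are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical arrays: each row of X holds 16 GK-2A L1B channel values,
# y is the in-situ 1.5 m air temperature. Shapes and values are placeholders.
X_train = np.random.rand(1024, 16).astype("float32")
y_train = np.random.uniform(-20, 35, size=(1024, 1)).astype("float32")

model = models.Sequential([
    layers.Input(shape=(16,)),             # 16 L1B channels
    layers.Dense(128, activation="relu"),  # four hidden fully connected layers;
    layers.Dense(128, activation="relu"),  # the widths are assumed, not taken
    layers.Dense(64, activation="relu"),   # from the paper
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                       # estimated air temperature (°C)
])

model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=0)
```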

A Study on Kiosk Satisfaction Level Improvement: Focusing on Kano, Timko, and PCSI Methodology (키오스크 소비자의 만족수준 연구: Kano, Timko, PCSI 방법론을 중심으로)

  • Choi, Jaehoon;Kim, Pansoo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.17 no.4
    • /
    • pp.193-204
    • /
    • 2022
  • This study measured the customer satisfaction level of kiosk users and analyzed the degree to which each factor influences its improvement. In modern times, owing to technological development and the improvement of the online environment, the probability that simple labor tasks will disappear within 10 years is estimated to be close to 90%, and even domestic research predicts that 'simple labor jobs' will disappear under the influence of advanced technology with a probability of about 36%. In particular, as demand for non-face-to-face services has increased owing to the recent global spread of COVID-19, the introduction of kiosks has accelerated, and the global market is expected to grow to 83.5 billion won in 2021 with an average annual growth rate of 8.9%. However, because kiosks are unmanned, some consumers still have difficulty using them. Consumers unfamiliar with these technologies show negative attitudes toward service co-production owing to their rejection of non-face-to-face services and anxiety about service errors; this lack of understanding leads to role conflicts between sales clerks and consumers, and creates inequality in service provision relative to generations accustomed to the technology. In addition, since the kiosk is a representative technology-based self-service industry, if users feel uncomfortable or additional labor is required, the overall service value decreases and the growth of the kiosk industry itself can be suppressed. Therefore, interviews on the main points of direct use were conducted with actual users, and the following evaluation items were extracted: display color scheme, text size, device design, device size, internal UI (interface), amount of information, recognition sensor (barcode, NFC, etc.), display brightness, self-event, and reaction speed. Afterwards, a questionnaire was used to classify each expected evaluation item into Kano model quality attributes, Timko's customer satisfaction coefficient was computed to obtain exact numerical values, and a PCSI Index analysis was additionally performed to classify the improvement impact of each kiosk evaluation item and determine the improvement priorities. As a result, the improvement impact appears in the order of internal UI (interface), text size, recognition sensor (barcode, NFC, etc.), reaction speed, self-event, display brightness, amount of information, device size, device design, and display color scheme. Through this, we intend to contribute to a comprehensive comparison of kiosk-based research in each field and to set the direction for improvement in the venture industry.
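
As an illustration of how Timko's customer satisfaction coefficient turns Kano category counts into numerical values, here is a minimal Python sketch using the standard better/worse formulas; the response counts are hypothetical placeholders, not the study's data.

```python
def timko_coefficients(attractive, one_dimensional, must_be, indifferent):
    """Timko's customer satisfaction coefficients from Kano category counts.

    Better = (A + O) / (A + O + M + I)   -> potential gain in satisfaction
    Worse  = -(O + M) / (A + O + M + I)  -> potential dissatisfaction if absent
    """
    total = attractive + one_dimensional + must_be + indifferent
    better = (attractive + one_dimensional) / total
    worse = -(one_dimensional + must_be) / total
    return better, worse

# Hypothetical response counts for one kiosk item (e.g., "internal UI"),
# for illustration only; not the paper's data.
better, worse = timko_coefficients(attractive=42, one_dimensional=35,
                                   must_be=18, indifferent=25)
print(f"Better: {better:.2f}, Worse: {worse:.2f}")
```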

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for morphologically rich languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras based on Theano. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not improved significantly and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model.
Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
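
To make the model setup concrete, here is a minimal sketch (Python with Keras) of a phoneme-level LSTM language model with a 74-symbol vocabulary and 20-symbol input windows, as described in the abstract; the hidden sizes, optimizer, and placeholder arrays are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB = 74    # unique phonemes/characters after pre-processing (as in the abstract)
SEQ_LEN = 20  # 20 consecutive symbols predict the 21st

# Hypothetical one-hot encoded data for illustration; real data would come from the corpus.
X = np.zeros((1000, SEQ_LEN, VOCAB), dtype="float32")
y = np.zeros((1000, VOCAB), dtype="float32")
X[:, :, 0] = 1.0  # mark one symbol per position so the arrays are valid one-hot vectors
y[:, 1] = 1.0     # placeholder target symbol

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, VOCAB)),
    layers.LSTM(256, return_sequences=True),  # hidden sizes are assumptions,
    layers.LSTM(256, return_sequences=True),  # not taken from the paper
    layers.LSTM(256),                         # 3-layer variant; a 4th LSTM could be stacked
    layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=128, verbose=0)

# Perplexity on held-out data is exp(mean cross-entropy loss)
loss = model.evaluate(X, y, verbose=0)
print("perplexity:", float(np.exp(loss)))
```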

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.231-252
    • /
    • 2021
  • Artificial intelligence (AI) is a key technology that will change the future the most. It affects industry as a whole and daily life in various ways. As data availability increases, artificial intelligence finds optimal solutions and makes inferences and predictions through self-learning. Research and investment related to automation that discovers and solves problems on its own are ongoing. Automating with artificial intelligence has benefits such as cost reduction and minimizing both human intervention and differences in human capability. However, there are side effects, such as limits on the autonomy of artificial intelligence and erroneous results due to algorithmic bias, and in the labor market it raises fears of job replacement. Prior studies on the use of artificial intelligence have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors, so they avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, artificial intelligence has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted to human-AI collaboration rather than AI alone without humans. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone. In medicine, pathologist-deep learning collaboration dropped the pathologists' cancer diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are starting to position AI as augmented intelligence. Human-AI collaboration is emphasized in the decision-making process because artificial intelligence is superior in information-based analysis, while intuition is a uniquely human capability, so their collaboration can yield optimal decisions. In an environment where change is accelerating and uncertainty is increasing, the need for artificial intelligence in decision-making will grow, and active discussion is expected on approaches that utilize artificial intelligence for rational decision-making. This study investigates the impact of artificial intelligence on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. The advisors were classified into three types: human, artificial intelligence, and human-AI collaboration. We investigated the perceived usefulness of advice and the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos, and the results showed that the advisor type does not directly affect the utilization of advice; decision-makers utilized advice only when they believed it could improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of the advice higher regardless of their personality traits, and the advice was utilized more actively. When the advisor was artificial intelligence alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and therefore utilized it actively. This study has academic significance in that it focuses on human-AI collaboration, an area of rapidly growing interest regarding the role of artificial intelligence.
It expands the relevant research area by considering the role of artificial intelligence as an advisor in decision-making and judgment research, and in terms of practical significance, it suggests views that companies should consider in order to enhance their AI capability. To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also employ people who properly understand the digital information presented by AI and can add non-digital information to make decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies such as analytical skills and information technology capabilities are important. In addition, greater performance is expected if employees' personality traits are also considered.

An Exploratory Study on the Competition Patterns Between Internet Sites in Korea (한국 인터넷사이트들의 산업별 경쟁유형에 대한 탐색적 연구)

  • Park, Yoonseo;Kim, Yongsik
    • Asia Marketing Journal
    • /
    • v.12 no.4
    • /
    • pp.79-111
    • /
    • 2011
  • The digital economy has grown rapidly, and the new business area called 'Internet business' has expanded dramatically over time. In the case of Internet business, however, the market shares of individual companies fluctuate extremely. Marketing managers who operate Internet sites have therefore closely observed the competition structure of the Internet business market and carefully analyzed competitors' behavior in order to achieve their business goals. Newly created Internet businesses may differ from offline businesses in management style because they operate under totally different business circumstances. Thus, more research is needed on what the features of Internet business are and how the management styles of Internet business companies should change. Most marketing literature related to Internet business has focused on individual business markets; specifically, many researchers have studied Internet portal sites and Internet shopping mall sites, which are the most common forms of Internet business. This study, on the other hand, focuses on the entire Internet business industry to understand the competitive circumstances of the online market. This approach makes it possible not only to take a broader view of the overall e-business industry, but also to understand the differences in competition structures among Internet business markets. We used time-series data of consumers' Internet connection rates as the basic data for identifying the competition patterns in Internet business markets. Specifically, the data for this research were obtained from an Internet ranking site, 'Fian'. The Internet business ranking data are based on the web surfing records of a pre-selected sample group, where the possibility of double-counting page views is controlled by checking for identical IPs. The ranking site offers several datasets that are very useful for comparing and analyzing competitive sites. The Fian site divides Internet business into 34 areas and offers daily market shares of the top five sites in each category. We collected the daily market share data for Internet sites in each area from April 22, 2008 to August 5, 2008; some data errors were found, and data for 30 business areas were finally used for our research after data cleaning. This study performed several empirical analyses focusing on the market shares of each site to understand the competition among sites in the Internet business of Korea. To make the analysis statistically more precise, we applied cluster analysis to the data to identify business fields with similar competitive structures (a minimal sketch of this clustering step follows below). The research results are as follows. First, the leading sites in each area were classified into three groups based on the averages and standard deviations of their daily market shares. The first group includes the sites with the lowest market shares, which increase convenience for consumers by offering Internet sites as complementary services for existing offline services. The second group includes sites with a medium level of market shares, whose users are limited to specific small groups. The third group includes sites with the highest market shares, which usually require online registration in advance and make it difficult to switch to another site.
Second, we analyzed the second-place sites in each business area because they help us understand the competitive power of the strongest competitor against the leading site. The second-place sites in each business area were classified into four groups based on the averages and standard deviations of their daily market shares: sites showing consistent inferiority to the leading sites, sites with relatively high volatility and a medium level of shares, sites with relatively low volatility and a medium level of shares, and sites with relatively low volatility and a high level of shares whose gaps with the leading sites are not large. Except for the 'web agency' area, these second-place sites show relatively stable shares, with standard deviations below 0.1 point. Third, we also classified the types of relative strength between the leading sites and the second-place sites by applying cluster analysis to the gaps in market shares between the two sites. They were likewise classified into four groups: sites with the relatively lowest gaps despite varying standard deviations, sites with below-average gaps, sites with above-average gaps, and sites with relatively large gaps and low volatility. We also found that while areas with relatively large gaps usually have small standard deviations, areas with very small differences between the first- and second-place sites have a wider range of standard deviations. The practical and theoretical implications of this study are as follows. First, the results might provide current market participants with useful information for understanding the competitive circumstances of the market and for building effective new business strategies for market success. They might also help potential new entrants find a new business area and set up successful competitive strategies. Second, the study might help Internet marketing researchers take a macro view of the overall Internet market, making possible new studies on the overall Internet market beyond individual market studies.
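
The following is a minimal sketch (Python with scikit-learn) of the kind of clustering described above: grouping sites by the mean and standard deviation of their daily market shares. The number of clusters and the generated share series are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical daily market-share series for leading sites in several business areas
# (rows = areas, columns = days); placeholder values for illustration only.
rng = np.random.default_rng(0)
shares = rng.uniform(0.2, 0.8, size=(30, 100))

# Features used in the grouping logic: average and standard deviation of daily shares
features = np.column_stack([shares.mean(axis=1), shares.std(axis=1)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)           # cluster assignment of each area's leading site
print(kmeans.cluster_centers_)  # (mean share, volatility) of each cluster
```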


Analysis of Oceanic Current Maps of the East Sea in the Secondary School Science Textbooks (중등 과학 교과서의 동해 해류도 분석)

  • Park, Kyung-Ae;Park, Ji-Eun;Seo, Kang-Sun;Choi, Byoung-Ju;Byun, Do-Seong
    • Journal of the Korean earth science society
    • /
    • v.32 no.7
    • /
    • pp.832-859
    • /
    • 2011
  • The importance of science education on accurate oceanic currents and circulation has been increasingly emphasized because currents play a significant role in climate change and the global energy balance. The objectives of this study are to analyze the errors in the oceanic current maps in textbooks, to discuss a variety of error sources, and to suggest how to produce a unified oceanic current map of the East Sea for students. Twenty-seven textbooks based on the 7th National Curriculum were analyzed, and the characteristics of their current maps were quantitatively investigated by comparison with both the previous literature and up-to-date scientific knowledge. All textbook maps, drawn with different projections, were converted to digitized image data on a Mercator projection using geolocation information. Detailed analyses were performed to investigate the patterns of the Tsushima Warm Current (TWC) in the Korea Strait, to examine how closely the nearshore branch of the TWC flows along the Japanese coast, to scrutinize the features of the offshore branch of the TWC south of the subpolar front in the East Sea, to quantitatively investigate the northern range of the northward-propagating East Korea Warm Current and the latitude at which it turns to the east, and lastly to examine the outflow of the TWC near the Tsugaru Strait and the Soya Strait. In addition, the origins, southern limits, and distances from the coast of the Liman Current and the North Korea Cold Current were analyzed, and other erroneous depictions of the currents in the textbooks were presented. These analyses revealed problems in the current maps of present textbooks, which might lead students to misconceptions. This study also addressed the need for a bridge between scientists, who hold up-to-date scientific results, and educators, who need educational materials.
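
As a reading aid for the map digitization step, here is a minimal sketch (Python) of the standard Mercator projection used to place latitude/longitude points on a common map; the central meridian, Earth radius, and sample points are assumptions for illustration, since the abstract does not give the exact digitization parameters.

```python
import numpy as np

def to_mercator(lat_deg, lon_deg, lon0_deg=128.0, radius=6371.0):
    """Project latitude/longitude (degrees) onto Mercator x/y coordinates (km).

    lon0_deg is an assumed central meridian near the East Sea; radius is the
    mean Earth radius. x = R * (lon - lon0), y = R * ln(tan(pi/4 + lat/2)).
    """
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    lon = np.radians(np.asarray(lon_deg, dtype=float) - lon0_deg)
    x = radius * lon
    y = radius * np.log(np.tan(np.pi / 4 + lat / 2))
    return x, y

# Example: two points along a hypothetical digitized current path
x, y = to_mercator([35.0, 38.5], [129.5, 131.0])
print(x, y)
```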

A Study on the Sensitivities of Cashflow and Growth Opportunities to Investments (기업투자와 성장기회, 현금흐름의 민감도에 관한 실증연구)

  • Lee, Won-Heum
    • The Korean Journal of Financial Management
    • /
    • v.24 no.2
    • /
    • pp.1-40
    • /
    • 2007
  • We test a model of the investment-cashflow-growth opportunities relationship in order to estimate the sensitivities of investment to cashflow and growth opportunities. In this study, we use a new proxy variable for the value of growth opportunities (hereafter "VGO"), which is based on the seminal papers of M&M (1958; 1961; 1963) and Lee (2006; 2007). The empirical findings on the sensitivities of cashflow and growth opportunities are as follows. First, when traditional proxy variables for growth opportunities such as Tobin's Q, MBR, and sales growth are included in the estimation together with the new proxy VGO, their coefficients turn out to be insignificant. Second, only the new proxy variable VGO shows a statistically significant positive sensitivity to investment, which can be interpreted as growth opportunities having a positive influence on investment. Third, Tobin's Q can be decomposed into three factors: the value of growth opportunities (VGO), the value of assets-in-place, and valuation errors. Among these, only VGO shows a statistically significant positive relationship with investment. This means that the new variable VGO is a good proxy for growth opportunities in investment-cashflow sensitivity analysis. In sum, these findings suggest that it is not appropriate to choose a proxy variable for growth opportunities from the traditional set of proxies such as Tobin's Q, MBR, or the sales growth rate.
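
To illustrate the sensitivity estimation referred to in the abstract, here is a minimal regression sketch (Python with statsmodels) of investment on cashflow and a growth-opportunity proxy; the variable names, scaling, and simulated panel are hypothetical, since the abstract does not specify the exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-year data for illustration only; not the paper's data or model.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "investment": rng.normal(0.10, 0.03, 500),  # capital expenditure scaled by assets
    "cashflow":   rng.normal(0.08, 0.04, 500),  # operating cashflow scaled by assets
    "vgo":        rng.normal(0.50, 0.20, 500),  # proxy for the value of growth opportunities
})

# OLS of investment on cashflow and the growth-opportunity proxy
X = sm.add_constant(df[["cashflow", "vgo"]])
model = sm.OLS(df["investment"], X).fit()
print(model.summary())  # the coefficient on vgo is the investment sensitivity of interest
```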
