• Title/Summary/Keyword: Performance analysis model

A Study on the Types and Characteristics of Tech Start-up Preparation of Middle-Aged Entrepreneurs (중장년 기술창업가의 창업 준비 유형 및 특성에 대한 연구)

  • Hong, Sungpyo;Kim, Minhee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.1
    • /
    • pp.125-140
    • /
    • 2023
  • Careful preparation for a start-up can lower the risk of failure and lead to a successful business model. For middle-aged entrepreneurs, however, challenges remain, as start-up services and policies are often not readily accessible or fully utilized. Despite active research on middle-aged start-ups, previous studies have not examined in depth who prepares for a start-up and through which preparation behaviors. In response, this study identified which start-up support services middle-aged entrepreneurs use and how start-up preparation can be classified on that basis. Data from 324 middle-aged tech start-up owners based in Seoul who started their businesses within the past 7 years were collected and analyzed. The results showed that middle-aged entrepreneurs had a moderate level of start-up preparation, with the greatest focus on the preparation period and the least on start-up education. Latent Profile Analysis revealed three preparation types among middle-aged entrepreneurs: an "Overall Insufficient Type," a "Lack of Start-up Education Type," and a "Comprehensive Preparation Type." A BCH auxiliary analysis was performed on start-up satisfaction, start-up competence, fear of failure, access to start-up services, and support needs for middle-aged entrepreneurs across the preparation types. The "Overall Insufficient Type" showed statistically lower start-up satisfaction, competence, and service accessibility than the other groups, while the "Comprehensive Preparation Type" showed a statistically lower fear of failure than the other types. The "Overall Insufficient Type" also had lower access to start-up services targeted at middle-aged founders. All types strongly recognized the need for support specialized for middle-aged start-ups. The findings highlight the need for more comprehensive support for middle-aged entrepreneurs, including expanding support projects to raise their level of preparation, providing customized support based on that level, and improving the visibility and accessibility of start-up support services for middle-aged individuals. Specialized education that addresses the characteristics of middle-aged individuals should also be provided.
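
The Latent Profile Analysis step can be approximated in code. Below is a minimal sketch, not the authors' procedure, that stands in a Gaussian mixture model for LPA over standardized preparation indicators and selects the number of profiles by BIC; the file name and indicator columns are illustrative assumptions.

```python
# Minimal sketch: approximating Latent Profile Analysis (LPA) with a Gaussian
# mixture model over standardized start-up preparation indicators.
# Column names are illustrative assumptions, not the study's actual variables.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("midlife_founders.csv")            # hypothetical survey file
indicators = ["prep_period", "startup_education", "funding_prep", "mentoring_use"]
X = StandardScaler().fit_transform(df[indicators])

# Compare 1-5 profile solutions by BIC, as is common in LPA model selection.
models = {k: GaussianMixture(n_components=k, covariance_type="diag",
                             random_state=0).fit(X) for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
profiles = models[best_k].predict(X)                # profile membership per founder
print(f"Selected {best_k} profiles; sizes:", np.bincount(profiles))
```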

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if forecasted volatility for tomorrow will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful because the Korea Exchange introduced a volatility futures contract, tradable since November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, and those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return; MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return; MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs such as brokerage commissions and slippage. The IVTS trading performance is unrealistic because we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
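
As a rough illustration of the SVR-based estimation idea, the sketch below regresses a squared-return target on GARCH(1,1)-style inputs with linear, polynomial, and radial kernels and compares out-of-sample MSE on the last 300 days. This is a simplified illustration, not the paper's exact procedure; the file name and the EWMA variance proxy are assumptions.

```python
# Minimal sketch of the SVR-based GARCH idea: regress a volatility proxy on its
# GARCH(1,1)-style inputs (lagged squared return and lagged variance estimate)
# with different kernels, then evaluate one-step-ahead forecasts.
import numpy as np
import pandas as pd
from sklearn.svm import SVR

px = pd.read_csv("kospi200.csv", parse_dates=["date"], index_col="date")["close"]
r = np.log(px).diff().dropna()
sigma2 = r.pow(2).ewm(alpha=0.06).mean()       # crude conditional-variance proxy (assumption)

# GARCH(1,1)-style regressors: lagged squared return and lagged variance estimate
X = np.column_stack([r.shift(1).pow(2), sigma2.shift(1)])[1:]
y = r.pow(2).values[1:]                        # target: next-day squared return
split = len(y) - 300                           # hold out the last 300 days for testing

for kernel in ("linear", "poly", "rbf"):
    svr = SVR(kernel=kernel, C=10.0, epsilon=1e-5).fit(X[:split], y[:split])
    mse = np.mean((svr.predict(X[split:]) - y[split:]) ** 2)
    print(f"{kernel:6s} out-of-sample MSE: {mse:.2e}")
```

The forecasts from the best kernel could then drive the IVTS rules described above: buy volatility when tomorrow's forecast rises, sell when it falls, and otherwise hold the current position.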

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate audience interest and emotion about contents, producers or investors need an index for measuring flow. However, it is neither easy to define flow quantitatively nor to collect the audience's reaction immediately. Previous studies evaluated group flow as the sum of the average value of each person's reaction. Flow, or the "good feeling" of each audience member, was extracted from the face, especially changes of expression, and from body movement. It was not easy, however, to handle the large amount of real-time data from each sensor signal, and it was difficult to set up the experimental devices in terms of economic and environmental constraints, because all participants had to wear their own personal sensor to record their physiological signals and a camera had to be placed in front of each person's head to capture facial expressions. Therefore, a simpler system is needed to analyze group flow. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure the synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) system. The differential image was obtained from a camera by subtracting the previous frame from the present frame, giving the movement variation of the audience's reaction. We then developed the GEA program as a flow judgment model. After measuring the audience's reaction, synchronization is divided into Dynamic State Synchronization and Static State Synchronization. Dynamic State Synchronization accompanies the audience's active reaction, while Static State Synchronization corresponds to the audience remaining nearly still. Dynamic State Synchronization can be caused by surprised reactions to scary, creepy, or reversal scenes, whereas Static State Synchronization is triggered by impressive or sad scenes. We therefore showed audiences several short movies containing the various kinds of scenes mentioned above, which made them sad, made them clap, gave them the creeps, and so on. To characterize the movement of the audience, we defined two critical points, α and β. Dynamic State Synchronization was considered meaningful when the movement value was over the critical point β, while Static State Synchronization was considered effective under the critical point α. β was derived from the clapping movement of 10 teams instead of using the average amount of movement. After checking the audience's reactive movement, the percentage ratio was calculated by dividing the number of "people having a reaction" by the "total number of people". A total of 37 teams participated in the experiments at the "2012 Seoul DMC Culture Open". First, the audience was induced to clap by staff; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; and fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. Audiences clapped and laughed at the amusing scenes, or shook their heads and hid their eyes, and the sad or touching scenes made them silent. If the result exceeded about 80%, the group could be judged to be synchronized and flow to have been achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, the flow factor can be found through synchronization in much larger groups, which should be useful for planning contents.
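
A minimal sketch of the differential-image measurement described above follows. It is an illustration, not the authors' GEA implementation: each frame is subtracted from the previous one, and the share of audience regions moving above β (dynamic synchronization) or below α (static synchronization) is computed; the thresholds and region boxes are illustrative assumptions.

```python
# Minimal sketch: frame differencing over audience regions to estimate the share
# of people reacting (dynamic sync) or sitting still (static sync) per frame.
import cv2
import numpy as np

def synchronization_ratio(video_path, regions, alpha=2.0, beta=8.0):
    """`regions` is a list of (x, y, w, h) boxes, one per audience member or team.
    Returns per-frame ratios of regions above beta and below alpha."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    dynamic, static = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                       # differential image
        moves = [diff[y:y+h, x:x+w].mean() for (x, y, w, h) in regions]
        dynamic.append(np.mean([m > beta for m in moves]))   # share reacting strongly
        static.append(np.mean([m < alpha for m in moves]))   # share sitting still
        prev = gray
    cap.release()
    return dynamic, static
```

A group would then be judged synchronized when the relevant ratio stays above roughly 80% during the target scene, as described in the abstract.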

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and it is difficult to identify the cause. Previous studies on predicting failure in data centers predicted failure by looking at a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring in the server is difficult to determine, and adequate prevention has not yet been achieved. This is because server failures do not occur in isolation: a failure may cause failures in other servers or be triggered by something propagated from other servers. In other words, while existing studies analyzed failure on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device are sorted in chronological order, and when a failure occurs in a specific piece of equipment, any failure occurring in other equipment within 5 minutes of that time is defined as occurring simultaneously. After configuring sequences of devices that failed at the same time, 5 devices that frequently failed together within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the level of influence on complex failures differs for each server. This algorithm increases prediction accuracy by giving more weight to the servers whose impact on the failure is larger. The study began with defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data were treated once as a single-server state and once as a multiple-server state, and the two were compared. The second experiment improved the prediction accuracy in the complex-server case by optimizing the threshold for each server. In the first experiment, the single-server case predicted that three of the five servers had no failure even though failures actually occurred, whereas under the multiple-server assumption all five servers were predicted to have failed. This result supports the hypothesis that there is an effect between servers. This study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring in servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
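
The modeling idea can be sketched as follows. This is an illustrative PyTorch sketch under assumed tensor shapes, not the authors' implementation: an LSTM encodes each server's resource time series, and an attention layer weights the servers before the failure prediction.

```python
# Minimal sketch: per-server LSTM encodings combined by attention over servers,
# followed by a sigmoid head that predicts the probability of a complex failure.
import torch
import torch.nn as nn

class HierarchicalFailureNet(nn.Module):
    def __init__(self, n_metrics, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_metrics, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each server's encoding
        self.head = nn.Linear(hidden, 1)          # failure logit

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_metrics) - resource time series per server
        b, s, t, m = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, m))
        server_vec = h[-1].reshape(b, s, -1)                # one vector per server
        w = torch.softmax(self.attn(server_vec), dim=1)     # attention weights over servers
        context = (w * server_vec).sum(dim=1)               # weighted summary of all servers
        return torch.sigmoid(self.head(context)).squeeze(-1)

model = HierarchicalFailureNet(n_metrics=8)
probs = model(torch.randn(4, 5, 60, 8))   # 4 samples, 5 servers, 60 time steps, 8 metrics
```

Per-server decision thresholds, as in the paper's second experiment, would then be tuned on validation data rather than fixed at 0.5.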

Estimation of Genetic Variations and Selection of Superior Lines from Diallel Crosses in Layer Chicken (산란계종의 잡종강세 이용을 위한 유전학적 기초연구와 우량교배조합 선발에 관한 연구)

  • 오봉국;한재용;손시환;박태진
    • Korean Journal of Poultry Science
    • /
    • v.13 no.1
    • /
    • pp.1-14
    • /
    • 1986
  • The objective of this study was to obtain genetic information for developing superior layer chickens. Heterosis and combining ability effects were estimated with 5,759 progenies from full diallel crosses of 6 strains of White Leghorn. Fertility, hatchability, brooder-house viability, rearing-house viability, laying-house viability, age at 1st egg laying, body weight at 1st egg laying, average egg weight, hen-day egg production, hen-housed egg production, and feed conversion were investigated and partitioned into heterosis effect, general combining ability, specific combining ability, and reciprocal effect by Griffing's Model I. The results are summarized as follows. 1. The general performance of each trait was 94.76% for fertility, 74.05% for hatchability, 97.47% for brooder-house viability, 99.72% for rearing-house viability, 93.81% for laying-house viability, 150 days for age at 1st egg laying, 1,505 g for body weight at 1st egg laying, 60.08 g for average egg weight, 77.11% for hen-day egg production, 269.8 eggs for hen-housed egg production, and 2.44 for feed conversion. 2. The heterosis effects were estimated at -0.66%, 9.58%, 0.26%, 1.83%, -3.87%, 3.63%, 0.96%, 4.23%, 6.4%, and -0.8% for fertility, hatchability, brooder-house viability, laying-house viability, age at 1st egg laying, body weight at 1st egg laying, average egg weight, hen-day egg production, hen-housed egg production, and feed conversion, respectively. 3. The results obtained from the analysis of combining ability were as follows: 1) Estimates of general combining ability, specific combining ability, and reciprocal effects were not high for fertility, which was considered to be affected mainly by environmental factors. For hatchability, general combining ability was more important than specific combining ability and reciprocal effects, and the superior strains were K and V, for which the additive genetic effects were very high. 2) For brooder-house viability and laying-house viability, specific combining ability and reciprocal effects appeared to be important, and the combinations K×A and A×K were superior. 3) For feed conversion and average egg weight, general combining ability was more important than specific combining ability and reciprocal effects. On the basis of combining ability, the superior strains were F, K, and B for feed conversion, and F and B for average egg weight. 4) General combining ability, specific combining ability, and reciprocal effects were all important for age at 1st egg laying, and the combinations V×F, F×K, and B×F were very useful on the basis of these effects. For body weight at 1st egg laying, general combining ability was relatively more important than specific combining ability and reciprocal effects, and the K, F, and E strains were recommended for developing a light strain. 5) General combining ability, specific combining ability, and reciprocal effects were important for hen-day egg production and hen-housed egg production, and the combinations F×K, A×K, and K×A were suitable for improving these traits. 4. In general, high general combining ability effects were estimated for hatchability, body weight at 1st egg laying, average egg weight, hen-day egg production, hen-housed egg production, and feed conversion; high specific combining ability effects for brooder-house viability, laying-house viability, age at 1st egg laying, hen-day egg production, and hen-housed egg production; and high reciprocal effects for age at 1st egg laying.
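
For reference, the diallel model assumed above (Griffing's Model I) and the usual mid-parent heterosis measure can be written as follows; the notation is the conventional one rather than the paper's own.

```latex
% Griffing's Model I diallel decomposition and mid-parent heterosis (standard notation).
\[
  y_{ijk} = \mu + g_i + g_j + s_{ij} + r_{ij} + e_{ijk},
\]
% where $g_i$, $g_j$ are the general combining abilities of the parental strains,
% $s_{ij}$ the specific combining ability of the cross $i \times j$, $r_{ij}$ the
% reciprocal effect, and $e_{ijk}$ the residual. Heterosis is expressed relative
% to the mid-parent value:
\[
  H(\%) = \frac{\bar{F}_{1} - \bar{P}_{\mathrm{mid}}}{\bar{P}_{\mathrm{mid}}} \times 100 .
\]
```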

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people around the world generate news minute by minute. Some news can be anticipated, but most news arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, to predict what might happen in the near future, and to share and discuss the news. People make better daily decisions by obtaining useful information from the news they watch. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media such as portal sites and broadcasters, and many news articles consist of gossip and breaking news. User interest also changes over time, and many people have no interest in outdated news, so a personalized news service must reflect users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. For a personalized service, the user's personal information is required, and a social network service is used to extract this information. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the social network services. The user information contains personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their contents and connect with people, and Facebook users can add a Page to indicate their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages cannot be directly matched to a news topic, because a Page deals with an individual object and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is therefore used to match Pages to news topics using the hierarchy information of its objects. By using recent Page information and articles of Facebook users, the proposed system can maintain a dynamic user profile. The generated user profile is used to measure user preferences for news. To generate news profiles, the news categories predefined by the news media are used, and keywords of news articles are extracted after analyzing news contents including title, category, and scripts. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. The same format is used for user profiles and news profiles so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, and hyponyms because they only handle the given words, so the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with high similarity values for a target user are recommended to the user. To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with participants' consent, and we implemented a Web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and constructed news profiles. We compared the performance of the proposed method with that of benchmark algorithms: one is a traditional method based on TF-IDF, and another is the 6Sub-Vectors method, which divides the points used to obtain keywords into six parts. Experimental results demonstrate, in terms of the prediction error of recommended news, that the proposed system provides useful news to users by applying the user's social network information and WordNet functions.
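
A minimal sketch of the matching step follows; it is illustrative only. The corpus, the user-profile text, and the synonym-expansion strategy are assumptions, and the paper's 6Sub-Vectors baseline is not reproduced: TF-IDF profiles are built for articles and the user, terms are expanded with WordNet synonyms, and articles are ranked by cosine similarity.

```python
# Minimal sketch: TF-IDF profiles with WordNet synonym expansion, ranked by
# cosine similarity between a user profile and candidate news articles.
import nltk
from nltk.corpus import wordnet
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nltk.download("wordnet", quiet=True)

def expand_with_synonyms(text):
    """Append WordNet synonyms so related words still overlap in vector space."""
    words = text.split()
    extra = [l.name().replace("_", " ")
             for w in words for s in wordnet.synsets(w) for l in s.lemmas()]
    return " ".join(words + extra)

news_texts = ["Senate passes new climate bill", "Team wins championship final"]
user_profile = "environment policy legislation"     # hypothetical text built from Pages/posts

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform([expand_with_synonyms(t) for t in news_texts])
user_vec = vec.transform([expand_with_synonyms(user_profile)])

scores = cosine_similarity(user_vec, doc_matrix).ravel()
top_n = scores.argsort()[::-1]                      # indices of recommended articles
print([(news_texts[i], round(float(scores[i]), 3)) for i in top_n])
```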

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the disease caused by the severe acute respiratory syndrome coronavirus 2) were published. The rapid increase in the number of papers related to COVID-19 places time and technical constraints on healthcare professionals and policy makers who need to find important research quickly. Therefore, in this study, we propose a method for extracting useful information from the text of an extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords to be searched were extracted from the COVID-19 papers, and detailed topics were identified. The data were the CORD-19 dataset on Kaggle, a free academic resource prepared by major research groups and the White House in response to the COVID-19 pandemic and updated weekly. The research method has two main stages. First, 41,062 articles were collected through data filtering and pre-processing of the abstracts of 47,110 academic papers including full text. The number of COVID-19 publications by year was analyzed through exploratory data analysis using a Python program, and the top 10 journals with the most active research were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and after analyzing the related words, their similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collected set of papers, detailed topics were analyzed using the LDA and Word2vec algorithms, and a clustering method using PCA dimension reduction was applied together with the t-SNE algorithm to visualize groups of papers with similar themes. A noteworthy point of the results is that topics which were not derived from the topic modeling of all COVID-19 papers were found in the topic modeling results for the individual research topics. For example, topic modeling of the papers related to 'vaccine' extracted a new topic, Topic 05 'neutralizing antibodies'. A neutralizing antibody is an antibody that protects cells from infection when a virus enters the body, and it is said to play an important role in the production of therapeutic agents and in vaccine development. Likewise, extracting topics from the papers related to 'treatment' revealed a new topic, Topic 05 'cytokine'. A cytokine storm occurs when the immune cells of the body, instead of defending against an attack, attack normal cells. Hidden topics that could not be found for the entire corpus were thus uncovered by classifying papers according to keywords and performing topic modeling to find detailed topics. In this study, we proposed a method of extracting topics from a large body of literature using the LDA algorithm and extracting similar words using the Skip-gram method of the Word2vec model, which predicts surrounding words from a center word. The combination of the LDA model and the Word2vec model aims at better performance by identifying both the relationships between documents and LDA topics and the word-level relationships captured by Word2vec. In addition, as a clustering method using PCA dimension reduction, we presented a way of intuitively classifying documents with similar themes into groups using the t-SNE technique, forming a structured organization of documents. In a situation where the efforts of many researchers to overcome COVID-19 cannot keep up with the rapid publication of COVID-19 papers, this approach should reduce the precious time and effort of healthcare professionals and policy makers and help them rapidly gain new insights. It is also expected to serve as basic data for researchers exploring new research directions.
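
A compact sketch of the two-stage pipeline (LDA topics, then Skip-gram Word2vec similar-word lookup) is shown below; the CSV file, column name, and hyperparameters are assumptions, not the study's settings.

```python
# Minimal sketch: LDA over CORD-19 abstracts to surface topics, then a Skip-gram
# Word2vec model to find terms similar to a topic keyword such as "vaccine".
import pandas as pd
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec
from gensim.utils import simple_preprocess

abstracts = pd.read_csv("cord19_metadata.csv")["abstract"].dropna()   # hypothetical file
tokens = [simple_preprocess(a) for a in abstracts]

dictionary = Dictionary(tokens)
dictionary.filter_extremes(no_below=5, no_above=0.5)
corpus = [dictionary.doc2bow(t) for t in tokens]

lda = LdaModel(corpus, num_topics=10, id2word=dictionary, passes=5, random_state=0)
for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)

# Skip-gram (sg=1) Word2vec for similar-word lookup around a topic keyword.
w2v = Word2Vec(sentences=tokens, vector_size=100, window=5, sg=1, min_count=5)
print(w2v.wv.most_similar("vaccine", topn=10))
```

Document-topic vectors from the LDA step could then be reduced with PCA and projected with t-SNE to visualize clusters of papers with similar themes, as the abstract describes.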

  • An Empirical Study on the Influencing Factors for Big Data Intented Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

    • Ka, Hoi-Kwang;Kim, Jin-soo
      • Asia pacific journal of information systems
      • /
      • v.24 no.4
      • /
      • pp.443-472
      • /
      • 2014
    • To survive in the global competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable performance, the implementation of big data systems has increased in many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. The reason big data is in the limelight is that, whereas conventional IT technology has lagged in what it makes possible, big data goes beyond technological possibility and can be used to create new values such as business optimization and new business creation through data analysis. However, because big data has often been introduced hastily, without considering the strategic value that can be derived and achieved through it, companies experience difficulties in deriving strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating through big data. To introduce big data, the strategic value should be identified and environmental factors such as internal and external regulations and systems should be considered, but these factors have not been well reflected. The cause of failure turned out to be that big data was introduced by following the IT trend and the surrounding environment, hastily and before the conditions for introduction were in place. For a successful introduction, the strategic value obtainable through big data should be clearly understood and a systematic analysis of the environment and applicability is essential; however, because corporations consider only partial achievements and technological aspects, successful introductions are not being made. Previous work shows that most big data research has focused on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, finding the factors influencing successful big data system implementation, and analyzing empirical models. To this end, the factors that can affect the intention to adopt big data were derived by reviewing information systems success factors, strategic value perception factors, factors concerning the information system introduction environment, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed. The statistical analysis showed that the strategic value perception factors and the intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the results are as follows. The first theoretical implication is that this study has theoretically proposed the factors affecting the intention to adopt big data by reviewing strategic value perception, environmental factors, and prior big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and laid a theoretical basis for subsequent empirical research by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in previous studies, this study can aid future empirical research on the factors affecting big data adoption. The practical implications are as follows. First, this study laid an empirical foundation for the big data field by investigating the cause-and-effect relationships between the strategic value perception factor, the environmental factor, and adoption intention, and by proposing measurement items with established reliability and validity. Second, the study found that the strategic value perception factor positively affects big data adoption intention, underscoring the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should do so on the basis of a precise analysis of the industry's internal environment. Fourth, by showing that the influencing factors differ depending on the size and type of business of the corporation, the study proposes that corporate size and business type should be considered when introducing big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways in products, services, productivity, and decision making, and it can be utilized in all business fields on that basis, but the areas considered by major domestic corporations are limited to parts of the product and service fields. Accordingly, when introducing big data, it is necessary to review the utilization phase in detail and to design the big data system in a form that maximizes the utilization rate. Second, the study points to the burden of system introduction costs, difficulties in using the system, and lack of trust in supplier corporations as issues in the big data adoption phase. Since global IT corporations dominate the big data market, the big data adoption of domestic corporations cannot help depending on foreign corporations. Considering that Korea, although a powerful IT country, does not have global IT corporations in this area, big data can be seen as an opportunity to grow world-class corporations, and the government will need to foster leading corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation. In big data, deriving valuable insights from data matters more than constructing the system itself; this requires talent equipped with academic knowledge and experience in fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. This study laid a theoretical basis for empirical research on big data by identifying and verifying the main variables affecting big data adoption intention, and it is expected to provide useful guidelines for corporations and policy makers who are considering big data implementation.
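
The hypothesized paths could be checked with a structural equation model along these lines. This is a sketch with assumed item names and a hypothetical data file, not the authors' measurement model, using the semopy package's lavaan-style syntax.

```python
# Minimal sketch: latent strategic-value and environment factors predicting big
# data adoption intention. Item names and the CSV file are illustrative assumptions.
import pandas as pd
from semopy import Model

desc = """
# measurement model (hypothetical survey items)
StrategicValue =~ sv1 + sv2 + sv3
Environment    =~ env1 + env2 + env3
Intention      =~ int1 + int2 + int3
# structural model
Intention ~ StrategicValue + Environment
"""

df = pd.read_csv("bigdata_survey.csv")   # hypothetical item-level responses
model = Model(desc)
model.fit(df)
print(model.inspect())                   # path estimates, standard errors, p-values
```

Moderation by firm size and business type, as in the study, could then be examined by fitting the same model separately for each subgroup.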

    An Empirical Study in Relationship between Franchisor's Leadership Behavior Style and Commitment by Focusing Moderating Effect of Franchisee's Self-efficacy (가맹본부의 리더십 행동유형과 가맹사업자의 관계결속에 관한 실증적 연구 - 가맹사업자의 자기효능감의 조절효과를 중심으로 -)

    • Yang, Hoe-Chang;Lee, Young-Chul
      • Journal of Distribution Research
      • /
      • v.15 no.1
      • /
      • pp.49-71
      • /
      • 2010
    • Franchise businesses in South Korea have contributed to economic growth and job creation, and their growth potential remains very high. However, despite such virtues, domestic franchise businesses face many problems, such as the instability of the franchisor's business structure and weak financial conditions. To solve these problems, the government enacted legislation and strengthened franchise-related laws. However, strengthening the laws regulating franchisors had many side effects that interrupted the development of the franchise business. For example, legal regulations on franchisors have had the effect of suppressing the franchisor's leadership activities (e.g., the ability to advocate the franchisor's policies and strategies to the franchisees in order to facilitate change and innovation). One of the main goals of the franchise business is to build cooperation between the franchisor and the franchisee for their combined success; however, franchisees can refuse to follow the franchisor's strategies because of the current state of franchise-related law and government policy. One purpose of this study is to explore the effects of the franchisor's leadership style on the franchisee's commitment in a franchise system. We classified leadership styles according to path-goal theory (House & Mitchell, 1974), and hypothesized and tested that the four leadership styles proposed by path-goal theory (i.e., directive, supportive, participative, and achievement-oriented leadership) have different effects on franchisee commitment. Another purpose of this study is to explore how the level of the franchisee's self-efficacy influences the relationship between the franchisor's leadership style and the franchisee's commitment in a franchise system. The results of the present study are expected to provide important theoretical and practical implications regarding the role of the franchisor's leadership style, as restricted by government regulation, and the franchisee's self-efficacy, which may be needed to improve the quality of the long-term relationship between franchisor and franchisee. As noted by Northouse (2007), one problem with investigating leadership is that there are almost as many definitions of leadership as there are people who have tried to define it. Despite the multitude of ways in which leadership has been conceptualized, the following components can be identified as central to the phenomenon: (a) leadership is a process, (b) leadership involves influence, (c) leadership occurs in a group context, and (d) leadership involves goal attainment. Based on these components, leadership is defined in this study as a process whereby the franchisor influences a group of franchisees to achieve a common goal. With this definition in mind, path-goal theory concerns how leaders motivate subordinates to accomplish designated goals. Drawing heavily from research on what motivates employees, path-goal theory first appeared in the leadership literature in the early 1970s in the works of Evans (1970), House (1971), House and Dessler (1974), and House and Mitchell (1974). The stated goal of this leadership theory is to enhance employee performance and satisfaction by focusing on employee motivation. In brief, path-goal theory is designed to explain how leaders can help subordinates along the path to their goals by selecting specific behaviors that are best suited to subordinates' needs and to the situation in which subordinates are working (Northouse, 2007). House and Mitchell (1974) noted that although many different leadership behaviors could have been included in path-goal theory, the approach has so far examined directive, supportive, participative, and achievement-oriented leadership behaviors, and they suggested that leaders may exhibit any or all of these four styles with various subordinates and in different situations. However, due to restrictive government regulations, franchisors are not in a position to change their leadership style to suit their circumstances. In addition, as Northouse (2007) notes, subordinate characteristics determine how a leader's behavior is interpreted by subordinates in a given work context. Many researchers have focused on subordinates' needs for affiliation, preferences for structure, desires for control, and self-perceived level of task ability. In this study, we focused on the self-perceived level of task ability, namely the franchisee's self-efficacy. According to Bandura (1977), self-efficacy is chiefly defined as a personal attitude toward one's ability to accomplish concrete tasks; it is therefore not an indicator of one's actual abilities but an opinion of the extent to which one can use that ability. Thus, the judgment that sustains the franchisee's commitment depends on the situation (e.g., government regulation and policy and the franchisor's leadership style) and how it affects one's ability to mobilize resources to deal with the task, so even people with the same ability may differ in self-efficacy. Figure 1 illustrates the model investigated in this study, in which it was hypothesized that leadership styles would affect the franchisee's commitment and that self-efficacy would moderate the relationship between leadership style and commitment. Theoretically, as Northouse (2007) notes, the path-goal approach suggests that leaders need to choose a leadership style that best fits the needs of subordinates and the work they are doing. According to House and Mitchell (1974), the theory predicts that a directive style of leadership is best in situations in which subordinates are dogmatic and authoritarian, the task demands are ambiguous, and the organizational rules and procedures are unclear; in such situations the franchisor's directive leadership complements the work by providing guidance and psychological structure for franchisees. For work that is structured, unsatisfying, or frustrating, path-goal theory suggests that leaders should use a supportive style; the franchisor's supportive leadership offers a sense of human touch for franchisees engaged in mundane, mechanized activity. The franchisor's participative leadership is considered best when a task is ambiguous, because participation gives greater clarity to how certain paths lead to certain goals and helps subordinates learn which actions lead to which outcomes. Furthermore, House and Mitchell (1974) predict that achievement-oriented leadership is most effective in settings in which subordinates are required to perform ambiguous tasks. Marsh and O'Neill (1984) tested the idea that organizational members' anger and decline in performance are caused by deficiencies in their level of effort, and found that self-efficacy promotes accomplishment and decreases stress and negative consequences such as depression and emotional instability. Based on the extant empirical findings and theoretical reasoning, we posit positive and strong relationships between the franchisor's leadership styles and the franchisee's commitment, and we expected the franchisee's level of self-efficacy to help maintain that commitment. The questionnaires sent to participants consisted of the following measures: leadership style was assessed using a 20-item 7-point Likert scale developed by Indvik (1985), self-efficacy was assessed using a 24-item 6-point Likert scale developed by Bandura (1977), and commitment was assessed using a 6-item 5-point Likert scale developed by Morgan and Hunt (1994). Questionnaires were distributed to Korean optical franchisees in Seoul, and data collection took about 20 days. A total of 140 questionnaires were returned, and complete data were available from 137 respondents. Multiple regression analyses with each of the four franchisor leadership styles as independent variables and franchisee commitment as the dependent variable showed that the relationship between supportive leadership style and commitment ($\beta$=.13, p<.001) and the relationship between participative leadership style and commitment ($\beta$=.07, p<.001) were significant. However, when participants were divided into high and low self-efficacy groups, only the relationship between achievement-oriented leadership style and commitment ($\beta$=.14, p<.001) was significant in the high self-efficacy group, while in the low self-efficacy group the relationships between supportive leadership style and commitment ($\beta$=.17, p<.001) and between participative leadership style and commitment ($\beta$=.10, p<.001) were significant (see the sketch following this abstract). The study focused on the franchisee's self-efficacy in order to explore the possibility that regulation, originally intended to protect the franchisee, may not be the most effective way to maintain relationships in a franchise business. The key results of the data analysis regarding the moderating role of self-efficacy between path-goal leadership behavior style and commitment were as follows. First, this study proposes that franchisors should apply the appropriate type of leadership behavior to strengthen franchisee commitment, because the results demonstrated that supportive and participative leadership styles by franchisors have a positive influence on the franchisee's level of commitment. Second, it is desirable for the franchisor to validate the franchisee's efforts, since franchisee characteristics such as self-efficacy had a substantial positive effect on commitment and were a meaningful moderator between leadership and commitment. Third, the results as a whole imply that the government should provide institutional support, namely putting the franchisor in a position to clearly identify the characteristics of its franchisees and providing reasonable means to guide the franchisees toward the company's goal.
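
The reported moderation check can be sketched as a pair of group-wise regressions. Variable names and the data file are illustrative assumptions; the paper's exact procedure and scales are not reproduced: commitment is regressed on the four path-goal styles separately for low and high self-efficacy franchisees.

```python
# Minimal sketch: regress commitment on the four path-goal leadership styles,
# separately for low and high self-efficacy groups (median split).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("franchisee_survey.csv")   # hypothetical questionnaire data
df["efficacy_group"] = (df["self_efficacy"] >= df["self_efficacy"].median()).map(
    {True: "high", False: "low"})

formula = "commitment ~ directive + supportive + participative + achievement"
for name, grp in df.groupby("efficacy_group"):
    fit = smf.ols(formula, data=grp).fit()
    print(f"--- {name} self-efficacy group ---")
    print(fit.params.round(3), fit.pvalues.round(4), sep="\n")
```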

    Home Economics teachers' concern on creativity and personality education in Home Economics classes: Based on the concerns based adoption model(CBAM) (가정과 교사의 창의.인성 교육에 대한 관심과 실행에 대한 인식 - CBAM 모형에 기초하여-)

    • Lee, In-Sook;Park, Mi-Jeong;Chae, Jung-Hyun
      • Journal of Korean Home Economics Education Association
      • /
      • v.24 no.2
      • /
      • pp.117-134
      • /
      • 2012
    • The purpose of this study was to identify the stages of concern, the levels of use, and the innovation configuration of Home Economics (HE) teachers regarding creativity and personality education in HE classes. Survey questionnaires were sent by mail and e-mail to middle-school HE teachers across the country, selected by systematic sampling and convenience sampling. The questionnaires on the stages of concern and the levels of use developed by Hall (1987) were used. Data from 187 respondents were used for the final analysis with the SPSS/Windows (12.0) program. The results of the study were as follows. First, for the stages of concern of HE teachers about creativity and personality education, the information stage of concern (85.51) had the highest response rate, followed in descending order by the awareness stage (82.15), the management stage (81.88), the refocusing stage (68.80), the collaboration stage (61.97), and the consequence stage (59.76). Second, the levels of use of HE teachers regarding creativity and personality education were highest at the mechanical level (level 3; 21.4%), followed by the orientation level (level 1; 20.9%), the refinement level (level 5; 17.1%), the non-use level (level 0; 15.0%), the preparation level (level 2; 10.2%), the integration level (level 6; 5.9%), the renewal level (level 7; 4.8%), and the routine level (level 4; 4.8%). Third, for the innovation configuration of HE teachers on creativity and personality education, more than half of the HE teachers (56.1%) mainly focused on personality education in their HE classes; 31.0% performed both creativity and personality education; a small number of teachers (6.4%) focused on creativity education; and the same proportion (6.4%) responded that they focused on neither. Examining the level and type of implementation, the average score for the performance of creativity and personality education was 3.76 out of 5.00, with a mean of 3.59 for the creativity components and 3.94 for the personality components, both higher than the standard. For creativity education, openness/sensitivity (3.97) was addressed most, followed by problem-solving skill (3.79), curiosity/interest (3.73), critical thinking (3.63), problem-finding skill (3.61), originality (3.57), analogy (3.47), fluency/adaptability (3.46), precision (3.46), imagination (3.37), and focus/sympathy (3.37). For personality education, the components were addressed in the following order from most to least: power of execution (4.07), cooperation/consideration/justice (4.06), self-management skill (4.04), civic consciousness (4.04), career development ability (4.03), environment adaptability (3.95), responsibility/ownership (3.94), decision making (3.89), trust/honesty/promise (3.88), autonomy (3.86), and global competency (3.55). Regarding what makes creativity and personality education difficult to implement, most HE teachers (64.71%) chose the lack of instructional materials, 40.11% chose the lack of seminar and workshop opportunities, 38.5% chose the difficulty of developing evaluation criteria or tools, and 25.67% responded that they did not know how to carry out creativity and personality education. Regarding better ways to support creativity and personality education, the HE teachers chose, in order from most to least: 'expansion of hands-on activities for students related to creativity and personality education' (4.34), 'development of an HE classroom culture that puts emphasis on creativity and personality' (4.29), 'a proper curriculum on creativity and personality education that goes along with students' developmental stages' (4.27), 'securing enough human resources and professors to conduct creativity and personality education' (4.21), 'establishment of the concept and value of creativity and personality education' (4.09), and 'educational promotion of creativity and personality education supported by local communities and companies' (3.94).
