• Title/Summary/Keyword: Business management performance


The Impacts of Entrepreneurial Proclivity and Merchandising Strategy on Conventional Market and Its Policy Implications (한국 재래시장상인의 창업가정신과 상품화 전략이 시장이미지와 경영성과에 미치는 영향과 재래시장 정책에 대한 시사점)

  • Suh, Geun-Ha; Yoon, Sung-Wook; Suh, Chang-Soo
    • Journal of Distribution Science / v.7 no.3 / pp.71-100 / 2009
  • The main purpose of this study is to identify the factors that influence successful start-ups and management innovation in traditional markets from the perspective of market structures and relations. To do this, we divide merchant entrepreneurship into two factors, risk taking and managerial experience, and use product planning and its implementation to capture the merchandising of traditional markets. We find that the selected factors contribute to management performance through market promotional parameters. We also confirm that the image of a traditional market consists of market awareness and market value, and that these factors show sequential and continual patterns in the course of generating performance. In addition, the independent factors are found to have positive effects on start-up success: risk taking 0.29 (t = 2.61), managerial experience 0.04 (t = 1.79), merchandising implementation 0.374 (t = 2.61), market value 0.47 (t = 5.25), and market awareness 0.22 (t = 2.30). This study can help merchants of traditional markets formulate and revise their market strategies, restructure their businesses, and survive in the field. It also provides ideas and guidance to the relevant government agencies in formulating traditional market policies.
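
The effect sizes and t-values above come from a regression-style model of start-up success. As a rough, hypothetical illustration (not the authors' actual procedure or data), the sketch below shows how such coefficients and t-values could be estimated once survey items have been aggregated into factor scores; the file and column names are placeholders.

```python
# Minimal sketch (not the authors' code): estimating effects of entrepreneurship
# and merchandising factors on start-up success with OLS, assuming the survey
# items have already been aggregated into factor scores. Column names are
# hypothetical placeholders for illustration only.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("merchant_survey.csv")  # hypothetical factor-score data

X = df[["risk_taking", "managerial_experience",
        "merchandising_implementation", "market_value", "market_awareness"]]
y = df["startup_success"]

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params)    # coefficients analogous to those reported in the abstract
print(model.tvalues)   # corresponding t-values
```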


A study on Categorized type and range for the Aircraft and the LSA (우리나라 항공기 및 경량항공기의 종류 및 범위에 대한 법적 고찰)

  • Kim, Woong-Yi; Shin, Dai-Won
    • The Korean Journal of Air & Space Law and Policy / v.28 no.1 / pp.55-71 / 2013
  • The safety of aircraft is ensured through aviation regulations and an institutional regulatory framework. As aircraft development diversifies and modernizes, new types of state-of-the-art aircraft are being operated. In particular, new devices such as light sport aircraft, gyroplanes, and unmanned flying devices have been introduced, and cases in which these devices are operated outside the standards of the Aviation Act occur frequently. A legal basis was established so that a variety of light aircraft and ultralight aircraft can be assembled and so that persons engaged in the aviation business can perform safety management. Among the newly introduced aircraft classifications, the biggest change is the introduction of the concept of the Light Sport Aircraft (LSA). In Korea, various light aircraft are in operation, but because their range is not clearly defined in the aviation regulations, it has been difficult to ensure their safety. This study examined the differences between international rules and Korean regulations regarding the classification of aircraft. Internationally, LSA are included in the aircraft categories, whereas in the Korean Aviation Act they are not; instead, they fall within the range of powered flying devices. The limit on maximum continuous power speed for an LSA restricts the rights of people who want to use high-performance aircraft; it also does not fit the international trend or the intent of LSA manufacturers. Deleting the maximum continuous power speed limit from the light aircraft provisions in a future revision of the aviation law is considered suitable for the purpose of fostering the light aircraft industry. The existing laws and regulations were set up to ensure the safety of the ultralight aircraft categories; for ultralight aircraft that exceed those categories, the purpose of introducing LSA and the state of technology development at home and abroad should be reflected. Unless these standards are supplemented, aircraft operation will not suit the domestic situation and it will be difficult to ensure operational safety. Moreover, since aircraft developed in other countries are being introduced and operated domestically, many problems arise and an early revision is required.


In-House Subcontracting and Industrial Relations in the Japanese Steel Industry (일본 철강산업의 사내하청과 노사관계)

  • Oh, Haksoo
    • Korean Journal of Labor Studies / v.24 no.1 / pp.107-156 / 2018
  • This article examines the history of in-house subcontracting and the stabilization of labor-management relations in the Japanese steel industry. The ratio of in-house subcontract workers among steel workers increased steadily until the mid-2000s, reaching about 70% at the largest company. In-house subcontracting was used as a corporate strategy to increase the quantitative flexibility of employment and to save labor costs. The in-house subcontract work required company-specific skills, and an internal labor market formed because the rate of full-time workers was high and the turnover rate was low. The in-house subcontractors built long-term business relationships with the steel factories by introducing the equipment and materials necessary to perform the work, while the factories implemented productivity-improvement policies for the subcontractors, so a win-win relationship developed between the factories and the in-house subcontractors. The trade union did not oppose the expansion of in-house subcontracting, on the view that it contributed to corporate profits, to the employment security of its members, and to the maintenance of their working conditions. Since 2000, the steel factories have pursued the transformation of in-house subcontractors into subsidiaries, supported by capital relations. Since the mid-2000s, however, the number of regular workers has increased. The major factors are as follows: strengthened compliance with laws and regulations, higher quality requirements from customers, stricter observance of deadlines, and difficulty in recruiting workers at in-house subcontract companies. The wage gap between the factory and the in-house subcontractors was smaller at company B than at company S, with the wage level of in-house subcontract workers at company B about 90% of that of the factory. The relatively small gap at company B seems to be due to the union's efforts to narrow the gap, low market dominance, and an unfavorable labor market. An internal labor market has formed within the in-house subcontracting companies, the wage gap is not large, and the possibility of labor disputes is low; industrial relations are therefore stable in the in-house subcontract companies as well as in the factories. Stabilizing labor-management relations in the Korean steel industry requires reducing the wage gap between the factories and the in-house subcontract enterprises by raising productivity and expanding the internal labor market at the in-house subcontract enterprises.

Membership Fluidity and Knowledge Collaboration in Virtual Communities: A Multilateral Approach to Membership Fluidity (가상 커뮤니티의 멤버 유동성과 지식 협업: 멤버 유동성에 대한 다각적 접근)

  • Park, Hyun-jung; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.19-47 / 2015
  • In this era of the knowledge economy, a variety of virtual communities are proliferating for the purpose of knowledge creation and utilization. Since the voluntary contributions of members are the essential source of knowledge, member turnover can have significant implications for the survival and success of virtual communities. However, there is a dearth of research on the effect of membership turnover, and even the method of measuring it in virtual communities remains unclear. In a traditional context, membership turnover is calculated as the ratio of the number of departing members to the average number of members for a given time period. In virtual communities, while the influx of newcomers can be clearly measured, the magnitude of departure is elusive since explicit withdrawals are seldom executed. In addition, there is no common way to determine the average number of community members when members return and contribute intermittently at will. This study initially examines the limitations in applying the concept of traditional turnover to virtual communities, and proposes five membership fluidity measures based on a preliminary analysis of the editing behavior of 2,978 featured articles in the English Wikipedia. Subsequently, this work investigates the relationships between three selected membership fluidity measures and group collaboration performance, reflecting a moderating effect dependent on work characteristics. We obtained the following results: First, membership turnover relates to collaboration efficiency in a right-shortened U-shaped manner, with a moderating effect from work characteristics; given the same turnover rate, the promotion likelihood for a more professional task is lower than that for a less professional task, and the likelihood difference diminishes as the turnover rate increases. Second, contribution period relates to collaboration efficiency in a left-shortened U-shaped manner, with a moderating effect from work characteristics; the marginal performance change per unit change of contribution period is greater for a less professional task. Third, the number of new participants per month relates to collaboration efficiency in a left-shortened reversed U-shaped manner, for which the moderating effect from work characteristics appears to be insignificant.
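
The abstract notes that the traditional turnover ratio (departing members over average membership) is hard to apply when withdrawals are implicit. As a rough illustration of that traditional notion only (not the paper's five fluidity measures), the sketch below infers departure from contributor activity logs; the data format and the "departed if silent by the period end" rule are assumptions.

```python
# Minimal sketch (my illustration): a traditional turnover rate computed from
# revision logs, treating a member as "departed" if their last edit falls
# before the end of the observation period.
def turnover_rate(edits, period_start, period_end):
    """edits: iterable of (user, timestamp) pairs from an article's revision history."""
    first_seen, last_seen = {}, {}
    for user, ts in edits:
        first_seen[user] = min(ts, first_seen.get(user, ts))
        last_seen[user] = max(ts, last_seen.get(user, ts))

    members_start = {u for u, ts in first_seen.items() if ts < period_start}
    members_end = {u for u, ts in last_seen.items() if ts >= period_end}
    departed = members_start - members_end            # active before, silent by the end
    avg_members = (len(members_start) + len(members_end)) / 2 or 1
    return len(departed) / avg_members

# Example with toy timestamps (days):
edits = [("alice", 1), ("bob", 2), ("alice", 30), ("carol", 12), ("dave", 31)]
print(turnover_rate(edits, period_start=10, period_end=28))
```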

A Mutual P3P Methodology for Privacy Preserving Context-Aware Systems Development (프라이버시 보호 상황인식 시스템 개발을 위한 쌍방향 P3P 방법론)

  • Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems / v.18 no.1 / pp.145-162 / 2008
  • One of the major concerns in the e-society is privacy. In particular, in developing a robust ubiquitous smart space and the corresponding services, user profiles and preferences are collected by the service providers. Privacy is even more critical in context-aware services simply because most context data are themselves private information: the user's current location, current schedule, friends nearby, and even his or her health data. To realize the potential of the ubiquitous smart space, the systems embedded in the space should incorporate personal privacy preferences. When users invoke a set of services, they are asked to allow the service providers or the smart space to make use of personal information that raises privacy concerns. For this reason, users reluctantly provide the personal information or even refuse the service. On the other side, the service provider wants personal information that is as rich as possible in order to distinguish loyal and trustworthy customers from those who are not. It would therefore be desirable to enlarge the personal information the user is willing to allow in response to the service provider's request, while minimizing both the provider's requests for information the user will not submit and the user's submission of information that is of no value to the provider. In particular, if any personal information required by the service provider is not allowed, the service will not be provided to the user. P3P (Platform for Privacy Preferences) has been regarded as one of the promising alternatives for preserving personal information in the course of electronic transactions. However, P3P mainly focuses on preserving the buyer's personal information; from time to time, the service provider's business data should also be protected from unintended usage by the buyers. Moreover, even though the user's privacy preferences may depend on the user's context, legacy P3P does not handle contextual changes in privacy preferences. Hence, the purpose of this paper is to propose a mutual P3P-based negotiation mechanism. To do so, the service provider's privacy concerns are considered as well as the user's: the user's privacy policy on the service provider's information should also be communicated to the service provider before the service begins. Second, the privacy policy is designed contextually according to the user's current context, because the nomadic user's privacy concern structure may change with context. Hence, the methodology includes a mutual privacy policy and personalization. The overall framework of the mechanism and a new code of ethics are described in section 2. The pervasive platform for mutual P3P considers the user type and the context field, which involves current activity, location, social context, objects nearby, and the physical environment. Our mutual P3P includes privacy preferences not only for the buyers but also for the sellers, that is, the service providers. The negotiation methodology for mutual P3P is proposed in section 3; it is based on the fact that privacy concerns occur when there are needs for information access and, at the same time, needs for information hiding. Our mechanism was implemented on the basis of an actual shopping mall to increase the feasibility of the idea proposed in this paper. A shopping service is assumed as the context-aware service, data groups for the service are enumerated, and the privacy policy for each data group is represented in APPEL format. To examine the performance of the example service, a simulation approach is adopted in section 4. For the simulation, the following data elements are considered: UserID, user preference, phone number, home address, product information, and service profile. For the negotiation, reputation is selected as the strategic value. The following cases are then compared: legacy P3P, mutual P3P without the strategic value, and mutual P3P with the strategic value. The simulation results show that mutual P3P outperforms legacy P3P. Moreover, when mutual P3P is considered with the strategic value, performance is better than mutual P3P without the strategic value in terms of service safety.
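
To make the "mutual" idea concrete, the following is a minimal sketch of my own (not the paper's APPEL-based mechanism): both sides declare which data elements they will disclose or require, and the service proceeds only when every required element is allowed. The element names follow the simulation above; the policy structure and function are illustrative assumptions.

```python
# Minimal sketch (my illustration, not the paper's negotiation mechanism):
# a mutual policy match between a user policy and a service provider policy.
user_policy = {
    "UserID": "allow",
    "User preference": "allow",
    "Phone number": "deny",
    "Home address": "allow",
}
provider_policy = {
    "required": ["UserID", "User preference"],                 # must be disclosed by the user
    "disclosed": ["Product information", "Service profile"],   # provider-side data offered
}
user_requires = ["Product information"]                        # provider data the user wants

def negotiate(user_policy, provider_policy, user_requires):
    missing = [e for e in provider_policy["required"]
               if user_policy.get(e) != "allow"]
    withheld = [e for e in user_requires
                if e not in provider_policy["disclosed"]]
    return {"service_granted": not missing and not withheld,
            "blocked_by_user": missing,
            "blocked_by_provider": withheld}

print(negotiate(user_policy, provider_policy, user_requires))
```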

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong; Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.39-54 / 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect the user's preferences and provide a recommendation list to the users; thus, the product recommender system in an online shopping store has been known as one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect the user's preferences cause user disappointment and waste of time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting the user's preferences precisely. The research data were collected from a real-world online shopping store that sells products from famous art galleries and museums in Korea. The data initially contain 5,759 transactions, of which 3,167 remain after deletion of null data. In this study, we transform the categorical variables into dummy variables and exclude outlier data. The proposed model consists of two steps. The first step predicts customers who have a high likelihood of purchasing products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group; we perform these data mining techniques using SAS E-Miner software. We partition the datasets into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation dataset is the same across all experiments. We then combine the results of each predictor using multi-model ensemble techniques such as bagging and bumping. Bagging, short for "bootstrap aggregating," combines the outputs of several machine learning models to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping, short for "bootstrap umbrella of model parameters," considers only the model with the lowest error value. The results show that bumping outperforms bagging and the other predictors except for the "Poster" product group, for which the artificial neural network model performs better than the other models. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extract thirty-one association rules according to the lift, support, and confidence measures, setting the minimum transaction frequency to support associations at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. The study also excludes extracted association rules with a lift value below 1, and fifteen association rules finally remain after removing duplicate rules. Among these fifteen rules, eleven describe associations between products within the "Office Supplies" product group, one links the "Office Supplies" and "Fashion" product groups, and the other three link the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed product recommender system provides a list of recommendations to the appropriate customers. We test the usability of the proposed system using a prototype and real-world transaction and profile data; to this end, we construct the prototype system using ASP, JavaScript, and Microsoft Access. In addition, we survey user satisfaction with the product list recommended by the proposed system and with randomly selected product lists. The survey participants are 173 persons who use MSN Messenger, Daum Café, and P2P services. We evaluate user satisfaction on a five-point Likert scale and perform a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% significance level, meaning that users were significantly more satisfied with the recommended product list. The results also suggest that the proposed system may be useful in a real-world online shopping store.
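
The bumping step described above keeps the single bootstrap model with the lowest error rather than averaging them as bagging does. The sketch below is my own illustration of that idea (the paper uses SAS E-Miner, not Python); the model choice, loader, and array names are assumptions.

```python
# Minimal sketch (my illustration, not the paper's SAS E-Miner workflow):
# bumping draws bootstrap samples, fits a model on each, and keeps the single
# model with the lowest error on the original data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def bumping(X, y, n_boot=25, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))        # bootstrap resample
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        err = log_loss(y, model.predict_proba(X))         # error on the original data
        if err < best_err:
            best_model, best_err = model, err
    return best_model

# Hypothetical usage with purchase-likelihood features:
# X, y = load_purchase_features()                # placeholder loader
# model = bumping(X, y)
# purchase_prob = model.predict_proba(X_new)[:, 1]
```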

Relation of Social Security Network, Community Unity and Local Government Trust (지역사회 사회안전망구축과 지역사회결속 및 지방자치단체 신뢰의 관계)

  • Kim, Yeong-Nam; Kim, Chan-Sun
    • Korean Security Journal / no.42 / pp.7-36 / 2015
  • This study aims to analyze differences in the social security network, community unity, and local government trust according to socio-demographic features, to explore the relations among these variables, to present the relations between the variables as a model, and to verify their mutual properties. The study sampled general citizens in Gwangju over about 15 days, from Aug. 15 to Aug. 30, 2014; a total of 450 questionnaires were distributed using cluster random sampling, 438 were collected, and 412 were used for the analysis. The validity and reliability of the questionnaire were verified through an experts' meeting, a preliminary test, factor analysis, and reliability analysis; the reliability of the questionnaire was Cronbach's α = .809 to .890. The input data were analyzed according to the study purpose using SPSS WIN 18.0, with factor analysis, reliability analysis, correlation analysis, independent-sample t-tests, ANOVA, multiple regression analysis, and path analysis as the statistical techniques. The findings obtained through these methods are as follows. First, building a social security network has an effect on community unity: the more active the local voluntary crime-prevention activities, the higher the awareness of community institutions, and the more active the street CCTV facilities, crime-prevention design, and local government safety education, the higher the stability. Second, building a social security network has an effect on trust in local government: the more active the local voluntary crime-prevention activities, crime-prevention design, local government safety education, and police public-order services, the greater the trust in policy, service management, and business performance. Third, community unity has an effect on trust in local government: the better the community institutions, the higher the trust in policy, and the more stable the community institutions, the higher the trust in business performance. Fourth, building a social security network has both direct and indirect effects on community unity and local government trust: the social security network has a direct effect on trust in local government, but its effect is greater when mediated by community unity. These results show that community unity in the Gwangju region is an important variable mediating between building a social security network and trust in local government. To win the trust of local residents, various cultural events and active communication spaces need to be prepared, and a social security network should be built to unite them.
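
The abstract reports questionnaire reliability of Cronbach's α = .809 to .890. As a small worked illustration of that coefficient only (not the authors' data or SPSS output), the sketch below computes Cronbach's alpha for one scale; the response matrix is made up.

```python
# Minimal sketch (my illustration): Cronbach's alpha for one scale of
# Likert-type survey items, rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Example with made-up responses from 5 respondents on a 4-item scale:
scale = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
print(round(cronbach_alpha(scale), 3))
```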


Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday crash, stock market prices have become very complex and noisy. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which consists of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted the KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful because the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH models in the testing period. The percentages of profitable trades for the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those for the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return and the SVR-based symmetric S-GARCH shows a +526.4% return; the MLE-based asymmetric E-GARCH shows a -72% return and the SVR-based asymmetric E-GARCH shows a +245.6% return; the MLE-based asymmetric GJR-GARCH shows a -98.7% return and the SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR, and other artificial intelligence models are needed in the search for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. The IVTS trading performance is unrealistic because we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
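
To illustrate the general SVR-GARCH idea described above (my own sketch, not the authors' exact specification), the GARCH(1,1)-style recursion sigma^2_t = f(r^2_{t-1}, sigma^2_{t-1}) can be approximated by regressing next-day squared returns on lagged squared returns and a rolling-variance proxy; the window size and data loader are assumptions.

```python
# Minimal sketch (my illustration): one-step volatility forecast via SVR on
# GARCH-style features (lagged squared return, lagged variance proxy).
import numpy as np
from sklearn.svm import SVR

def svr_garch_forecast(returns, kernel="linear", window=20):
    r2 = returns ** 2
    # crude proxy for the lagged conditional variance: rolling mean of r^2
    proxy = np.convolve(r2, np.ones(window) / window, mode="valid")
    X = np.column_stack([r2[window - 1:-1], proxy[:-1]])   # features at t-1
    y = r2[window:]                                        # target: r^2 at t
    model = SVR(kernel=kernel).fit(X, y)
    next_x = np.array([[r2[-1], proxy[-1]]])
    return float(model.predict(next_x)[0])                 # forecast variance for t+1

# Hypothetical usage with daily KOSPI 200 log returns:
# returns = np.diff(np.log(kospi200_close))
# sigma2_hat = svr_garch_forecast(returns, kernel="linear")
```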

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand nodes by distributing the stored data across multiple nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL databases are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time log data analysis are stored in the MySQL module and provided real-time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation proves the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
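
As a small illustration of the document-oriented storage and per-period aggregation described above (my own sketch, not the paper's implementation), the snippet below inserts schema-free log documents with pymongo and counts log entries per type and hour, the kind of summary the log graph generator module would plot; the connection string, database, and field names are assumptions.

```python
# Minimal sketch (my illustration): storing unstructured bank log documents in
# MongoDB and aggregating counts per log type and hour.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical cloud node
logs = client["bank"]["logs"]

# Unstructured log entries: documents may carry different fields freely.
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "type": "transfer",
    "branch": "seoul-01",
    "payload": {"amount": 150000, "channel": "mobile"},
})

# Aggregate the number of log entries per type and per hour of day.
pipeline = [
    {"$group": {
        "_id": {"type": "$type", "hour": {"$hour": "$ts"}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["count"])
```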

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho; Park, Sun W.; Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.191-204 / 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly qualified SW developers reaches almost 80%, and this phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the development of the national IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find young beginning-level SW developers. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming an SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new SW developers. Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. As the method, we carried out a survey from September 2014 to October 2014 targeting 130 SW developers working in the IT industry in South Korea. We gathered the demographic information and characteristics of the respondents, the work environment of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed to analyze the data; these two widely used data mining techniques have explanatory ability and are mutually complementary. We first performed a linear regression analysis to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as an SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is the structural problem of the IT industry in South Korea, which requires SW developers to move from development to management as they are promoted. Also, the 'motivation' to become an SW developer and the 'personality (introverted tendency)' of an SW developer are highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of developers with high and low motivation. We used the well-known C4.5 algorithm for the decision tree analysis. The results showed that 'motivation', 'personality', and 'expected age' were also important factors influencing the job continuity intention, which is similar to the results of the regression analysis. In addition, the 'ability to learn' new technologies was a crucial factor in the decision rules for job continuity; in other words, a person with a high ability to learn new technologies tends to work as an SW developer for a longer period. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method. In this research, we investigated the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts. They can also help in building policies to foster SW developers and in solving the problem of unfilled SW developer recruitment in South Korea.
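
The decision-rule extraction described above can be illustrated with a short sketch of my own (the paper uses C4.5; scikit-learn's CART implementation is a stand-in here, and the data file, feature names, and target column are hypothetical).

```python
# Minimal sketch (my illustration): extracting decision rules for job
# continuity intention from survey features with a decision tree.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("sw_developer_survey.csv")          # hypothetical survey data
features = ["expected_age", "motivation", "personality_introversion",
            "ability_to_learn", "social_position", "industry_prospect"]
X = df[features]
y = df["high_continuity_intention"]                  # 1 = intends to keep working as a developer

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10).fit(X, y)
print(export_text(tree, feature_names=features))     # human-readable decision rules
```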