• Title/Summary/Keyword: Additional Cost


Analysis of Munitions Contract Work Using Process Mining (프로세스 마이닝을 이용한 군수품 계약업무 분석 : 공군 군수사 계약업무를 중심으로)

  • Joo, Yong Seon;Kim, Su Hwan
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.41-59 / 2022
  • The timely procurement of military supplies is essential to maintaining the military's operational capabilities, and contract work is the first step toward timely procurement. In addition, rapid signing of a contract enables the requesting units to set an unhurried delivery date and increases the likelihood of budget execution, so improving the contract process is essential to preventing early budget execution and the transfer or disuse of funds. Recently, research using big data has been actively conducted in various fields, and process analysis using big data and process mining, an improvement technique, is also widely used in the private sector. However, analysis of contract work in the military has been limited to individual analyses, such as identifying the cause of each problem case of budget transfer or contract disuse from the experience and fragmentary information of the person in charge. To improve the contract process, this study applied process mining to data on a total of 560 contracts handled directly by the Department of Finance of the Air Force Logistics Command over about one year from November 2019. Process maps were derived by synthesizing the distributed data, and process flow analysis, execution time analysis, bottleneck analysis, and additional detailed analyses were conducted. The analysis found that review/modification occurred repeatedly after the request stage in a number of contracts. Repeated reviews/modifications significantly delay completion of the cost calculation, which the bottleneck visualization also clearly revealed. Review/modification occurred in more than 60% of cases in the top five departments with the most contract requests, and it usually occurred in the first half of the year, when requests are concentrated, meaning that a thorough review is required before the requesting departments submit contract requests. In addition, although the contract work of the Department of Finance was carried out according to the procedures prescribed by laws and regulations, it was found that the order of some tasks needed to be adjusted. This study is the first case of applying process mining to the analysis of contract work in the military. If further research applies process mining to other military tasks, similar efficiency gains are expected.
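
The kind of event-log analysis this abstract describes (process-map discovery, execution time and bottleneck analysis) can be sketched with the open-source pm4py library. The sketch below is illustrative only: the column names and sample events are hypothetical placeholders, not the study's actual contract data.

```python
# Hypothetical contract event log; not the study's data.
import pandas as pd
import pm4py

events = pd.DataFrame({
    "contract_id": ["C001", "C001", "C001", "C002", "C002"],
    "activity":    ["Request", "Review/Modify", "Cost Calculation",
                    "Request", "Cost Calculation"],
    "timestamp":   pd.to_datetime(["2019-11-01", "2019-11-05", "2019-11-20",
                                   "2019-11-02", "2019-11-10"]),
})

log = pm4py.format_dataframe(events, case_id="contract_id",
                             activity_key="activity",
                             timestamp_key="timestamp")

# Frequency-based process map (directly-follows graph).
dfg, starts, ends = pm4py.discover_dfg(log)

# Performance view: mean time on each edge exposes bottlenecks
# such as repeated Request -> Review/Modify loops.
perf_dfg, p_starts, p_ends = pm4py.discover_performance_dfg(log)
print(dfg)       # edge -> frequency
print(perf_dfg)  # edge -> mean seconds between activities
```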

An Empirical Study on the Improvement of In Situ Soil Remediation Using Plasma Blasting, Pneumatic Fracturing and Vacuum Suction (플라즈마 블라스팅, 공압파쇄, 진공추출이 활용된 지중 토양정화공법의 정화 개선 효과에 대한 실증연구)

  • Jae-Yong Song;Geun-Chun Lee;Cha-Won Kang;Eun-Sup Kim;Hyun-Shic Jang;Bo-An Jang;Yu-Chul Park
    • The Journal of Engineering Geology / v.33 no.1 / pp.85-103 / 2023
  • The in situ remediation of a solidified stratum containing a large amount of fine-textured material such as clay or organic matter in contaminated soil faces limitations such as increased remediation cost resulting from decreased purification efficiency. Even if soil conditions are good, remediation generally requires a long time to complete because of non-uniform soil properties and low permeability. This study assessed the remediation effect and evaluated the field applicability of a methodology that combines pneumatic fracturing, vacuum extraction, and plasma blasting (the PPV method) to overcome the limitations of existing underground remediation methods. For comparison, underground remediation was performed over 80 days using the experimental PPV method and chemical oxidation (the control method). The control group showed no decrease in the degree of contamination because of poor delivery of the soil remediation agent, whereas the PPV method clearly reduced the degree of contamination during the remediation period. The remediation effect, as assessed by the reduction of the highest TPH (Total Petroleum Hydrocarbons) concentration by distance from the injection well, was unclear in the control group, whereas the PPV method showed a remediation effect of 62.6% within a 1 m radius of the injection well, 90.1% within 1.1~2.0 m, and 92.1% within 2.1~3.0 m. When remediation efficiency was evaluated by the average rate of TPH concentration reduction by distance from the injection well, the control group again showed no clear trend; in contrast, the PPV method showed a 53.6% remediation effect within 1 m of the injection well, 82.4% within 1.1~2.0 m, and 68.7% within 2.1~3.0 m. Both ways of assessing purification efficiency (based on changes in the maximum and average TPH contamination concentrations) found the PPV method to increase the remediation effect by 149.0~184.8% compared with the control group, an average increase of about 167%. The time required to reduce contamination by 80% of the initial concentration was evaluated by deriving a correlation equation from the TPH concentration data: the PPV method could reduce the purification time by 184.4% compared with chemical oxidation. However, this evaluation of a single site cannot be applied equally to all strata, so additional research is necessary to explore the proposed method's effect more clearly.
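
The abstract derives a correlation equation from TPH concentrations to estimate the time needed for an 80% reduction. Below is a hedged sketch of one common way to do this, assuming a first-order decay model and illustrative sample data, not the study's measurements or its actual equation.

```python
# Fit an assumed first-order decay curve to TPH readings and solve for
# the time at which concentration falls to 20% of its initial value.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0, 10, 20, 40, 60, 80], dtype=float)   # sampling days
tph  = np.array([5000, 3600, 2700, 1500, 900, 550], dtype=float)  # mg/kg

def decay(t, c0, k):
    return c0 * np.exp(-k * t)  # first-order decay assumption

(c0, k), _ = curve_fit(decay, days, tph, p0=(tph[0], 0.01))

# 80% reduction means C/C0 = 0.2, i.e. t = ln(5) / k.
t80 = np.log(5.0) / k
print(f"fitted k = {k:.4f}/day, ~{t80:.0f} days to an 80% reduction")
```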

Development of Tree Carbon Calculator to Support Landscape Design for the Carbon Reduction (탄소저감설계 지원을 위한 수목 탄소계산기 개발 및 적용)

  • Ha, Jee-Ah;Park, Jae-Min
    • Journal of the Korean Institute of Landscape Architecture / v.51 no.1 / pp.42-55 / 2023
  • A methodology for predicting the carbon performance of newly created urban greening plans is required, as policies based on quantifying carbon performance are rapidly being introduced in the face of the climate crisis caused by global warming. This study developed a tree carbon calculator that can be used for carbon reduction design in landscaping and attempted to verify its effectiveness in landscape design. For practical operability, MS Excel was selected as the format, and carbon absorption and storage values by tree type and size were extracted for 93 representative species to reflect planting design characteristics. A database including tree unit prices was established to reflect cost limitations. To verify the performance of the tree carbon calculator, a planting design experiment was conducted by simulating park designs in the central region across four landscape designs, and the causal relationships were analyzed through semi-structured interviews before and after. As a result, carbon absorption and carbon storage in the designs using the tree carbon calculator were about 17-82% and about 14-85% higher, respectively, than in designs that did not use it. The increase in carbon performance efficiency was attributable to additional planting being actively carried out within the given budget, along with the substitution of species with excellent carbon performance. Pre-interviews revealed that, before using the tree carbon calculator, designers distrusted the data and felt burdened by a new program, but their attitudes changed positively because of its usefulness and ease of use. To implement carbon reduction design in the landscaping field, the tool needs to be developed further into a carbon calculator covering both trees and overall landscape performance. This study is expected to present a useful direction for introducing carbon reduction design based on quantitative data in landscape design.
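
The calculator's core logic, as described, sums carbon absorption and storage for a planting plan subject to a budget. A minimal sketch of that logic follows; the species names, coefficients, and prices are hypothetical placeholders, not values from the study's 93-species database.

```python
# Toy tree carbon calculator: totals for a planting plan under a budget.
from dataclasses import dataclass

@dataclass
class TreeSpec:
    species: str
    dbh_cm: float        # trunk diameter class
    uptake_kg_yr: float  # CO2 absorbed per tree per year (placeholder)
    storage_kg: float    # CO2 stored per tree (placeholder)
    unit_price: int      # KRW per tree (placeholder)

CATALOG = [
    TreeSpec("Zelkova serrata", 10, 12.5, 60.0, 150_000),
    TreeSpec("Pinus densiflora", 10, 6.1, 35.0, 120_000),
]

def plan_totals(plan: dict[str, int]) -> tuple[float, float, int]:
    """plan maps species name -> tree count; returns (uptake, storage, cost)."""
    by_name = {t.species: t for t in CATALOG}
    uptake = sum(by_name[s].uptake_kg_yr * n for s, n in plan.items())
    storage = sum(by_name[s].storage_kg * n for s, n in plan.items())
    cost = sum(by_name[s].unit_price * n for s, n in plan.items())
    return uptake, storage, cost

uptake, storage, cost = plan_totals({"Zelkova serrata": 30,
                                     "Pinus densiflora": 50})
assert cost <= 12_000_000, "over budget"   # cost limitation check
print(f"{uptake:.0f} kg CO2/yr absorbed, {storage:.0f} kg stored, {cost:,} KRW")
```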

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information / v.14 no.12 / pp.157-167 / 2009
  • Recently, leakage of confidential and personal information has been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. It is impossible to detect an attack on a web application with existing firewalls and intrusion detection systems, and signature-based detection has a limited capability to detect new threats. Therefore, much research on detecting attacks against web applications employs anomaly-based detection methods that analyze web traffic. Research on anomaly-based detection through normal web traffic analysis focuses on three problems: how to accurately analyze given web traffic; the system performance needed to inspect the application payload of packets, which is required to detect attacks on the application layer; and the maintenance costs of the many newly installed network security devices. The UTM (Unified Threat Management) system, a suggested solution to this problem, aimed to resolve all security problems at once, but it is not widely used because of its low efficiency and high cost. Besides, the web filter, which performs one of the functions of the UTM system, cannot adequately detect the variety of recent sophisticated attacks on web applications. To resolve these problems, studies are being carried out on the web application firewall as a new network security system. As such studies focus on speeding up packet processing with high-priced hardware, the cost of deploying a web application firewall is rising. In addition, current anomaly-based detection technologies that do not take the characteristics of web applications into account cause many false positives and false negatives. To reduce false positives and false negatives, this study proposes a real-time anomaly detection method based on analyzing the length of parameter values contained in web clients' requests. It also designs and proposes a WAF (Web Application Firewall) that can be applied to a low-priced or legacy system to process application data without dedicated hardware. Furthermore, it proposes a method to resolve the sluggish performance caused by copying packets into the application area for application data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying an additional security system is considered burdensome because of the many network security systems already in use.
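
The proposed detection rule, flagging requests whose parameter value lengths deviate strongly from lengths seen in normal traffic, can be sketched as follows. The threshold rule (a Chebyshev-style k-sigma bound) is an illustrative choice, not necessarily the authors' exact model.

```python
# Learn per-parameter value-length statistics from normal traffic,
# then flag requests whose parameter lengths are far from the baseline.
import statistics
from urllib.parse import parse_qsl

class ParamLengthModel:
    def __init__(self):
        self.samples: dict[str, list[int]] = {}

    def train(self, query_string: str) -> None:
        for name, value in parse_qsl(query_string):
            self.samples.setdefault(name, []).append(len(value))

    def is_anomalous(self, query_string: str, k: float = 6.0) -> bool:
        for name, value in parse_qsl(query_string):
            seen = self.samples.get(name)
            if not seen or len(seen) < 2:
                continue  # no baseline for this parameter yet
            mu = statistics.mean(seen)
            sigma = statistics.pstdev(seen) or 1.0
            if abs(len(value) - mu) > k * sigma:
                return True  # e.g. an injected payload far longer than normal
        return False

model = ParamLengthModel()
for qs in ["id=42&name=kim", "id=7&name=lee", "id=315&name=park"]:
    model.train(qs)
print(model.is_anomalous("id=1&name=" + "A" * 500))  # True: suspicious length
```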

EU's Space Code of Conduct: Right Step Forward (EU의 우주행동강령의 의미와 평가)

  • Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy / v.27 no.2 / pp.211-241 / 2012
  • The Draft International Code of Conduct for Outer Space Activities, officially proposed by the European Union on the occasion of the 55th Session of the United Nations Committee on the Peaceful Uses of Outer Space in June 2012 in Vienna, Austria, is intended to fill the lacunae in the norms applicable to human activities in outer space and thus merits our attention. The missing elements of these norms span from the prohibition of an arms race and the safety and security of space objects, including measures to reduce space debris, to the exchange of information on space activities among space-faring nations. The EU's initiative, when implemented, will cover these issues or will eventually prepare a forum in which the international community can address them. The initiative, begun at the end of 2008, included unofficial contacts with major space powers, in particular the USA, whose position is believed to be reflected in the Draft, with the aim of having it adopted in 2013. Although the Code would constitute soft law rather than hard law for the subscribing countries, the USA seems to fear the eventuality whereby its strategic advantages in outer space would be affected if the Code's currently non-binding norms evolve into a prohibition on placing weapons in outer space. It is with this trepidation that the USA has been opposing the adoption of United Nations General Assembly resolutions on the prevention of an arms race in outer space (PAROS) and, in the same context, the setting-up of a working group on the arms race in outer space in the framework of the Conference on Disarmament. China and Russia, who together put forward a draft Treaty on Prevention of the Placement of Weapons in Outer Space and of the Threat or Use of Force against Outer Space Objects (PPWT) in 2008, would not feel comfortable either, because the EU initiative will steal the limelight; consequently, their reactions to the Draft Code are understandably passive, while the reaction of the USA to the PPWT was a clear-cut "No". Against this background, the future of the EU Code is uncertain. Nevertheless, the Code's purposes of reducing space debris, allowing the exchange of information on space activities, and protecting space objects through safety and security, all to maximize the principle of the peaceful use and exploration of outer space, are laudable efforts on the part of the EU. When detailed negotiations are held, some issues, including the cost of setting up an office for clerical work, could be discussed in the interest of an efficient and economical mechanism. For example, the new clerical work envisaged in the Draft Code could be discharged by the current UN OOSA (Office for Outer Space Affairs) with minimal additional resources. The EU's initiative is another meaningful contribution, following its role in the adoption of the Kyoto Protocol of 1997 to the UNFCCC (UN Framework Convention on Climate Change), and deserves praise from the thoughtful international community.


Organizational Buying Behavior in an Interdependent World (상호의존세계중적조직구매행위(相互依存世界中的组织购买行为))

  • Wind, Yoram;Thomas, Robert J.
    • Journal of Global Scholars of Marketing Science / v.20 no.2 / pp.110-122 / 2010
  • The emergence of the field of organizational buying behavior in the mid-1960s with the publication of Industrial Buying and Creative Marketing (1967) set the stage for a new paradigm of thinking about how business is conducted in markets other than those serving ultimate consumers. Whether it is called "industrial marketing" or "business-to-business marketing" (B-to-B), organizational buying behavior remains the core differentiating characteristic of this domain of marketing. This paper explores the impact of several dynamic factors that have influenced how organizations relate to one another amid rapidly increasing interdependence, which in turn can impact organizational buying behavior. The paper also raises the question of whether the major conceptual models of organizational buying behavior are still relevant to guide research and managerial thinking in this dynamic, interdependent business environment. The paper is structured to explore three questions related to organizational interdependencies: 1. What are the factors and trends driving the emergence of organizational interdependencies? 2. Will the major conceptual models of organizational buying behavior that have developed over the past half century be applicable in a world of interdependent organizations? 3. What are the implications of organizational interdependencies for the research and practice of organizational buying behavior? Consideration of the factors and trends driving organizational interdependencies revealed five critical drivers in the relationships among organizations that can impact their purchasing behavior: Accelerating Globalization, Flattening Networks of Organizations, Disrupting Value Chains, Intensifying Government Involvement, and Continuously Fragmenting Customer Needs. These five interlinked drivers of interdependency and their underlying technological advances can alter the relationships within and among organizations that buy products and services to remain competitive in their markets. Viewed in the context of a customer-driven marketing strategy, these forces affect three levels of strategy development: (1) evolving customer needs, (2) the resulting product/service/solution offerings to meet these needs, and (3) the organizational competencies and processes required to develop and implement the offerings. The five drivers of interdependency among organizations do not necessarily operate independently in their impact on how organizations buy. They can interact with each other and become even more potent in their impact on organizational buying behavior. For example, accelerating globalization may influence the emergence of additional networks that further disrupt traditional value chain relationships, thereby changing how organizations purchase products and services. Increased government involvement in business operations in one country may increase the costs of doing business and therefore drive firms to seek low-cost sources in emerging markets in other countries. This can reduce employment opportunities in one country and increase them in another, further accelerating the pace of globalization. The second major question in the paper is what impact these drivers of interdependency have had on the core conceptual models of organizational buying behavior. Consider the three enduring conceptual models developed in the Industrial Buying and Creative Marketing and Organizational Buying Behavior books: the organizational buying process, the buying center, and the buying situation.
A review of these core models of organizational buying behavior, as originally conceptualized, shows they are still valid and not likely to change with the increasingly intense drivers of interdependency among organizations. What will change, however, is the way in which buyers and sellers interact under conditions of interdependency. For example, increased interdependencies can lead to increased opportunities for collaboration as well as conflict between buying and selling organizations, thereby changing aspects of the buying process. In addition, the importance of communication processes between and among organizations will increase as trust becomes an important criterion for a successful buying relationship. The third question in the paper explored the consequences and implications of these interdependencies for the practice and research of organizational buying behavior. The following are considered in the paper: the need to increase understanding of network influences on organizational buying behavior, the need to increase understanding of the role of trust and value among organizational participants, the need to improve understanding of how to manage organizational buying in networked environments, the need to increase understanding of customer needs in the value network, and the need to increase understanding of the impact of emerging new business models on organizational buying behavior. In many ways, these needs deriving from increased organizational interdependencies are an extension of the conceptual tradition in organizational buying behavior. In 1977, Nicosia and Wind suggested a focus on inter-organizational over intra-organizational perspectives, a trend that has received considerable momentum since the 1990s. Likewise, to survive in an increasingly interdependent world, managers will need to better understand the complexities of how organizations relate to one another. The transition from an inter-organizational to an interdependent perspective has begun and must continue in order to develop an improved understanding of these important relationships. A shift to such an interdependent network perspective may require many academicians and practitioners to fundamentally challenge and change the mental models underlying their business and organizational buying behavior models. The focus can no longer be only on the dyadic relations of the buying organization and the selling organization but should involve all the related members of the network, including the network of customers, developers, and other suppliers and intermediaries. Consider, for example, the numerous partner networks initiated by SAP, which involve over 9,000 companies and over a million participants. This evolving, complex, and uncertain reality of interdependencies and dynamic networks requires reconsideration of how purchase decisions are made; as a result, these decisions should be the focus of the next phase of research and theory building among academics and of the practical models and experiments undertaken by practitioners. The hope is that such research will take place, not in the isolation of the ivory tower, nor in the confines of the business world, but rather through increased collaboration of academics and practitioners. In conclusion, the consideration of increased interdependence among organizations revealed the continued relevance of the fundamental models of organizational buying behavior.
However, to increase the value of these models in an interdependent world, academics and practitioners should improve their understanding of (1) network influences, (2) how to better manage these influences, (3) the role of trust and value among organizational participants, (4) the evolution of customer needs in the value network, and (5) the impact of emerging new business models on organizational buying behavior. To accomplish this, greater collaboration between industry and academia is needed to advance our understanding of organizational buying behavior in an interdependent world.

The Evaluation of Forest-road Network Considering Optimum Forest-road Arrangement and Yarding Function (최적임도배치(最適林道配置) 및 집재기능(集材機能)을 고려(考慮)한 임도배치망(林道配置網) 평가(評價))

  • Park, Sang Jun;Bae, Sang Tae
    • Current Research on Agriculture and Life Sciences / v.19 / pp.45-54 / 2001
  • This study was carried out to provide fundamental data for prospective forest-road projects and forest-road network arrangement by appraising the existing forest-road network in terms of density, extension distance, maximum yarding distance, yarding area, and the position of forest-road lines, based on two theories: the "theory of optimal forest-road density", which minimizes the combined expense of yarding and forest-road construction, and the "theory of optimal forest-road arrangement", which maximizes the investment effect. The results are as follows. 1. For density and extension distance of the forest-road by site, the density of the existing forest-roads was lower than the calculated forest-road density, so some additional forest-roads should be constructed. 2. For the arrangement of the forest-road network by site, the calculated forest-road arrangement was superior to the existing arrangement for forestry and yarding functions, so forest-road networks should be arranged to maximize the investment effect. 3. For the mean maximum yarding distance and the mean yardable area by horizontal and inclined distance, the existing forest-road networks differed from the calculated networks, and the calculated network, which maximizes the investment effect, is more effective than the existing one. Hence, in prospective forest-road projects, networks should be constructed that maximize the yardable area by taking the yarding function into account.
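
The "theory of optimal forest-road density" mentioned above balances road construction cost against yarding cost. Below is a sketch of the textbook Matthews-type trade-off under that assumption, with illustrative numbers rather than the study's site data.

```python
# With road density D (m/ha), mean yarding distance ~ 2500/D meters,
# so cost per hectare is C(D) = r*D + y*v*2500/D; setting dC/dD = 0
# gives the optimum D* = sqrt(2500*y*v / r). Values are placeholders.
import math

r = 50_000   # road construction cost, KRW per meter of road
v = 150.0    # harvest volume, m^3 per hectare
y = 30.0     # yarding cost, KRW per m^3 per meter of yarding distance

def cost_per_ha(D: float) -> float:
    return r * D + y * v * 2500.0 / D

D_opt = math.sqrt(2500.0 * y * v / r)
print(f"optimal road density ~ {D_opt:.1f} m/ha, "
      f"total cost {cost_per_ha(D_opt):,.0f} KRW/ha")
```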


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, the increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data, and as a result, data analysis technology is rapidly becoming popular. Attempts to acquire insights through data analysis have also been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually falling and data analysis technology is spreading; as a result, big data analysis is increasingly expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and a lot of attention is focused on using text data in particular. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Many text mining techniques are utilized in this field for various research purposes; topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so it is essential to analyze the entire collection at once to identify the topic of each document. This causes a long analysis time when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of objects analyzed. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost through its ability to analyze documents in each location without first combining them. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has been studied insufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegated documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In addition, through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire corpus. We also propose a reasonable method for comparing the results of the two methods.
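
A minimal sketch of the divide-and-conquer scheme described above, using gensim LDA: fit a topic model per local set, build a reduced global set (RGS) from delegated documents, then map each local topic to its closest RGS topic by cosine similarity of topic-word vectors. The toy corpus, the one-document-per-set delegation rule, and the similarity measure are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from gensim import corpora
from gensim.models import LdaModel

docs = [["army", "contract", "budget"], ["budget", "review", "delay"],
        ["soil", "remediation", "tph"], ["tph", "injection", "well"]]
local_sets = [docs[:2], docs[2:]]

dictionary = corpora.Dictionary(docs)  # shared vocabulary across sets

def fit_lda(texts, k=2):
    bow = [dictionary.doc2bow(t) for t in texts]
    return LdaModel(bow, num_topics=k, id2word=dictionary, random_state=0)

# RGS: one delegated document per local set (here simply the first one).
rgs_lda = fit_lda([s[0] for s in local_sets])

def map_local_to_rgs(local_lda, rgs_lda):
    # topic-word matrices, shape (num_topics, vocab_size)
    L, G = local_lda.get_topics(), rgs_lda.get_topics()
    sims = L @ G.T / (np.linalg.norm(L, axis=1, keepdims=True)
                      * np.linalg.norm(G, axis=1))
    return sims.argmax(axis=1)  # each local topic -> closest RGS topic

for i, s in enumerate(local_sets):
    mapping = map_local_to_rgs(fit_lda(s), rgs_lda)
    print(f"local set {i}: local topics map to RGS topics {mapping}")
```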

Evaluation of a colloid gel(Slime) as a body compensator for radiotherapy (Colloid gel(Slime)의 방사선 치료 시 표면 보상체로서의 유용성 평가)

  • Lee, Hun Hee;Kim, Chan Kyu;Song, Kwan Soo;Bang, Mun Kyun;Kang, Dong Yun;Sin, Dong Ho;Lee, Du Heon
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.191-199 / 2018
  • Purpose: In this study, we evaluated the usefulness of colloid gel (slime) as a compensator for irregular patient surfaces in radiation therapy. Materials and Methods: Colloid gel suitable for treatment was made, and four experiments were conducted to evaluate its applicability to radiation therapy. A Trilogy (Varian) and a CT scanner (SOMATOM, Siemens) were used as the treatment and imaging equipment. First, the homogeneity according to the composition of the colloid gel was measured using EBT3 film (RIT). Second, the Hounsfield Unit (HU) value of the colloid gel was measured and confirmed with a CIRS phantom, the Eclipse RTP (Eclipse 13.1, Varian), and CT. Third, to measure the deformation and degeneration of the colloid gel over the treatment period, it was measured three times daily for 2 weeks using an ion chamber (PTW-30013, PTW). The fourth experiment compared the treatment plan with measured dose distributions using bolus, rice, and colloid gel and, additionally, compared dose profiles in an environment similar to actual treatment using our own acrylic phantom. Result: In the first experiment, the densities of colloid gel cases 1, 2, and 3 were 1.02 g/cm³, 0.99 g/cm³, and 0.96 g/cm³. When homogeneity was measured at 6 MV and 9 MeV, case 1 was more homogeneous than the other cases, at 1.55 and 1.98. In the second experiment, the HU values of cases 1, 2, and 3 were 15, and when the treatment plan was compared with the measured doses, the difference was within 1% at both 9 and 12 MeV, and the differences of -1.53% and -1.56% at 6 MV were within the 2% criterion. In the third experiment, the dose change of the colloid gel was about 1% over 2 weeks. In the fourth experiment, the dose difference between the treatment plan and EBT3 film was similar for colloid gel, bolus, and rice at 6 MV, but the colloid gel showed a smaller dose difference than bolus and rice at 9 MeV. Also, the dose profile of the colloid gel showed a more uniform dose distribution than those of bolus and rice. Conclusion: In this study, the density of the colloid gel prepared for radiation therapy was 1.02 g/cm³, similar to the density of water, and no alteration or deformation was observed during the radiotherapy process. Although attention must be paid to density when manufacturing colloid gel, it is a sufficient compensator in that it can deliver dose more uniformly over a patient's irregular body surface than bolus or rice and can be manufactured at low cost. With further study, it is expected to be applicable to clinical radiation therapy.
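
The percent dose-difference comparisons quoted above follow from a simple calculation: (measured - planned) / planned x 100, checked against a tolerance. A small sketch with illustrative readings (the 1-2% tolerances mirror the abstract's criteria; the dose values are placeholders):

```python
# Percent dose difference between planned and measured dose, with a
# pass/fail check against a tolerance criterion.
def dose_diff_percent(planned_cgy: float, measured_cgy: float) -> float:
    return (measured_cgy - planned_cgy) / planned_cgy * 100.0

readings = {"6 MV": (200.0, 196.9), "9 MeV": (200.0, 198.4)}  # placeholders
for beam, (plan, meas) in readings.items():
    d = dose_diff_percent(plan, meas)
    status = "OK" if abs(d) <= 2.0 else "FAIL"  # 2% criterion
    print(f"{beam}: {d:+.2f}% ({status})")
```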


The Relations between Financial Constraints and Dividend Smoothing of Innovative Small and Medium Sized Enterprises (혁신형 중소기업의 재무적 제약과 배당스무딩간의 관계)

  • Shin, Min-Shik;Kim, Soo-Eun
    • Korean small business review / v.31 no.4 / pp.67-93 / 2009
  • The purpose of this paper is to explore the relations between financial constraints and dividend smoothing of innovative small and medium-sized enterprises (SMEs) listed on the Korea Securities Market and the Kosdaq Market of the Korea Exchange. Innovative SMEs are defined as firms with a high level of R&D intensity, measured as the (R&D investment / total sales) ratio, following Chauvin and Hirschey (1993). R&D investment plays an important role as an innovation driver that can increase the future growth opportunities and profitability of firms; therefore, R&D investment has a large, positive, and consistent influence on the market value of the firm. From this point of view, we expect that innovative SMEs can adjust dividend payments faster than non-innovative SMEs, on the grounds of their future growth opportunities and profitability. We also expect that financially unconstrained firms can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through market access. Aivazian et al. (2006) assert that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms. We collect the sample firms from among all SMEs listed on the Korea Securities Market and the Kosdaq Market of the Korea Exchange during the period from January 1999 to December 2007, from the KIS Value Library database. The total number of firm-year observations over the entire period is 5,544; the number of firm-year observations of dividend-paying firms is 2,919, and that of non-dividend firms is 2,625. Thus about 53% (2,919) of the 5,544 observations involve firms that make a dividend payment. The dividend-paying firms are divided into two groups according to R&D intensity: innovative SMEs with above-median R&D intensity and non-innovative SMEs with below-median R&D intensity. The number of firm-year observations of the innovative SMEs is 1,506, and that of the non-innovative SMEs is 1,413. Furthermore, the innovative SMEs are divided into two groups according to the level of financial constraints, financially unconstrained firms and financially constrained firms, with 894 and 612 firm-year observations, respectively. Although all available firm-year observations of dividend-paying firms are collected, financial firms such as banks, securities companies, insurance companies, and other financial services companies are deleted, because their capital structures and business styles differ widely from those of general manufacturing firms. Stock repurchases were included in dividend payments because Grullon and Michaely (2002) examined the substitution hypothesis between dividends and stock repurchases. However, our data form an unbalanced panel, since there is no requirement that firm-year observations be available for every firm during the entire period from January 1999 to December 2007 in the KIS Value Library database. We first estimate the classic Lintner (1956) dividend adjustment model, in which the decision to smooth dividends or to adopt a residual dividend policy depends on financial constraints measured by market accessibility.
The Lintner model indicates that firms maintain a stable, long-run target payout ratio and partially adjust the gap between the current payout ratio and the target payout ratio each year. In the Lintner model, the dependent variable is the current dividend per share (DPSt), and the independent variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt). We hypothesize that firms partially adjust the gap between the current dividend per share (DPSt) and the target payout ratio (Ω) each year when the past dividend per share (DPSt-1) deviates from the target payout ratio (Ω). Second, we estimate an expanded model that extends the Lintner model by including the determinants suggested by the major theories of dividends, namely residual dividend theory, dividend signaling theory, agency theory, catering theory, and transaction cost theory. In the expanded model, the dependent variable is the current dividend per share (DPSt), the explanatory variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt), and the control variables are the current capital expenditure ratio (CEAt), the current leverage ratio (LEVt), the current operating return on assets (ROAt), the current business risk (RISKt), the current trading volume turnover ratio (TURNt), and the current dividend premium (DPREMt). Among these control variables, CEAt, LEVt, and ROAt are the determinants suggested by residual dividend theory and agency theory; ROAt and RISKt are suggested by dividend signaling theory; TURNt by transaction cost theory; and DPREMt by catering theory. Third, we estimate the Lintner model and the expanded model using the panel data of financially unconstrained firms and financially constrained firms, divided into two groups according to the level of financial constraints. We expect that financially unconstrained firms can adjust dividend payments faster than financially constrained firms, because the former can finance investment funds more easily through their market access. We analyzed descriptive statistics such as the mean, standard deviation, and median to delete outliers from the panel data; conducted one-way analysis of variance to check for industry-specific effects; and conducted difference tests of firm characteristic variables between innovative and non-innovative SMEs, as well as between financially unconstrained and financially constrained firms. We also conducted correlation analysis and variance inflation factor analysis to detect any multicollinearity among the independent variables. Both the correlation coefficients and the variance inflation factors are low enough that multicollinearity among the independent variables can be ignored. Furthermore, we estimate both the Lintner model and the expanded model using panel regression analysis. We first test whether time-specific and firm-specific effects are present in our panel data using the Lagrange multiplier test proposed by Breusch and Pagan (1980), and then conduct a Hausman test, which shows that the fixed effects model fits our panel data better than the random effects model. The main results of this study can be summarized as follows.
The determinants suggested by the major theories of dividends, namely residual dividend theory, dividend signaling theory, agency theory, catering theory, and transaction cost theory, significantly explain the dividend policy of innovative SMEs. The Lintner model indicates that firms maintain a stable, long-run target payout ratio and partially adjust the gap between the current payout ratio and the target payout ratio each year. Among the core variables of the Lintner model, the past dividend per share has a greater effect on dividend smoothing than the current earnings per share. These results suggest that innovative SMEs maintain a stable, long-run dividend policy that sustains the past dividend-per-share level in the absence of special corporate circumstances. The main results show that the dividend adjustment speed of innovative SMEs is faster than that of non-innovative SMEs. This means that innovative SMEs with a high level of R&D intensity can adjust dividend payments faster than non-innovative SMEs, on the grounds of their future growth opportunities and profitability. The other main results show that the dividend adjustment speed of financially unconstrained SMEs is faster than that of financially constrained SMEs. This means that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through market access. Furthermore, additional results show that the dividend adjustment speed of innovative SMEs classified as such by the Small and Medium Business Administration is faster than that of unclassified SMEs; the classified SMEs are linked with various financial policies and services such as credit guarantee services, policy funds for SMEs, venture investment funds, insurance programs, and so on. In conclusion, the past dividend per share and the current earnings per share suggested by the Lintner model mainly explain the dividend adjustment speed of innovative SMEs, and financial constraints explain it partially. Therefore, if managers properly understand the relations between financial constraints and dividend smoothing of innovative SMEs, they can maintain a stable, long-run dividend policy for innovative SMEs through dividend smoothing. These are encouraging results for the Korean government, specifically the Small and Medium Business Administration, which has implemented many policies to support innovative SMEs. This paper may have a few limitations because it is only an early study of the relations between financial constraints and dividend smoothing of innovative SMEs; in particular, it may not adequately capture all of the subtle features of innovative SMEs and financially unconstrained SMEs. Therefore, we think it is necessary to expand the sample firms and control variables and to use more elaborate analysis methods in future studies.
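
The Lintner partial-adjustment model estimated in this study can be sketched as an OLS regression of DPSt on DPSt-1 and EPSt, where the speed of adjustment is one minus the DPSt-1 coefficient and the implied target payout ratio is the EPSt coefficient divided by that speed. The panel below is synthetic placeholder data, not the study's KIS Value sample.

```python
# Lintner (1956): DPS_t = a + b1*DPS_{t-1} + b2*EPS_t + e
# speed of adjustment = 1 - b1; target payout ratio = b2 / (1 - b1)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
dps_lag = rng.uniform(100, 1000, n)          # DPS_{t-1}, synthetic
eps     = rng.uniform(500, 5000, n)          # EPS_t, synthetic
dps     = 0.7 * dps_lag + 0.12 * eps + rng.normal(0, 30, n)  # DPS_t

X = sm.add_constant(np.column_stack([dps_lag, eps]))
res = sm.OLS(dps, X).fit()

b1, b2 = res.params[1], res.params[2]
print(f"speed of adjustment = {1 - b1:.3f}")
print(f"implied target payout ratio = {b2 / (1 - b1):.3f}")
```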