• Title/Summary/Keyword: 대용

Search Result 6,377, Processing Time 0.031 seconds

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has driven the vigorous development of related technologies and tools. In addition, the development of IT and the growing penetration rate of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading. As a result, big data analysis is expected to be performed by those who request it themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, much attention is focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques applied for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is considered very useful in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. However, despite these advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has been studied far less than other topic modeling approaches. In this paper, we propose a topic modeling approach that solves these two problems.
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology provides results similar to topic modeling on the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
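The divide-and-conquer idea described in this abstract can be sketched in a few lines. The following is a minimal illustration (not the authors' exact method) using scikit-learn: LDA is fit on the full collection and on one local sub-cluster, and each local topic is mapped to its most similar global topic by cosine similarity of the topic-word distributions. The corpus, topic counts, and similarity-based mapping are all assumptions for illustration.

```python
# Sketch of divide-and-conquer topic modeling: fit LDA per local sub-cluster,
# then map local topics to global topics by cosine similarity of the
# topic-word distributions. Corpus and topic counts are illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market price investors trading",
    "market economy price inflation trading",
    "team game season players coach",
    "players score game championship team",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# Global topics from the full collection.
global_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Local topics from one sub-cluster (here, the first two documents).
local_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X[:2])

def normalize(m):
    # Turn each row into a probability distribution over words.
    return m / m.sum(axis=1, keepdims=True)

G = normalize(global_lda.components_)   # global topic-word distributions
L = normalize(local_lda.components_)    # local topic-word distributions

# Map each local topic to its nearest global topic by cosine similarity.
sim = (L @ G.T) / (
    np.linalg.norm(L, axis=1, keepdims=True) * np.linalg.norm(G, axis=1)
)
mapping = sim.argmax(axis=1)
print(mapping)  # local topic i -> global topic mapping[i]
```

The paper's own mapping and its accuracy measure (agreement between global and local topic assignments) are more elaborate; this only shows the local-to-global mapping step in its simplest form.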

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis, which has proliferated within text mining, is an important technology that can distinguish poor from high-quality content using the text data about products. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions with respect to accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels, and is one of the most active research areas in natural language processing and text mining. Online reviews are easy to collect openly and can directly affect a business. In marketing, real-world information from customers is gathered from websites rather than surveys: depending on whether a website's posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are unreliable and difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains a recognized problem, because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set.
First, for text classification related to sentiment analysis, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to a bag-of-words model when processing a sentence in vector format, but it does not consider the sequential attributes of the data. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how well, and why, these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining these two algorithms are as follows. A CNN can automatically extract features for classification by applying convolution layers and massively parallel processing, whereas an LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem that the CNN cannot address.
Furthermore, when an LSTM is applied after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN alone but faster than the LSTM alone, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernel is trained step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure offers the advantage of layer-by-layer learning. For these reasons, this study aims to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.

Clinical Outcomes of Corrective Surgical Treatment for Esophageal Cancer (식도암의 외과적 근치 절제술에 대한 임상적 고찰)

  • Ryu Se Min;Jo Won Min;Mok Young Jae;Kim Hyun Koo;Cho Yang Hyun;Sohn Young-sang;Kim Hark Jei;Choi Young Ho
    • Journal of Chest Surgery / v.38 no.2 s.247 / pp.157-163 / 2005
  • Background: Clinical outcomes of esophageal cancer have not been satisfactory in spite of the development of surgical techniques and adjuvant therapy protocols. We analyzed the results of corrective surgery for esophageal cancer from January 1992 to July 2002. Material and Method: Among 129 patients with esophageal cancer, this study included the 68 patients who received corrective surgery. The male-to-female ratio was 59:9 and the mean age was 61.07±7.36 years. The chief complaints of these patients were dysphagia, epigastric pain, weight loss, etc. The locations of the esophageal cancers were the upper esophagus in 4 patients, the middle in 36, the lower in 20, and the esophagogastric junction in 8. Sixty patients had squamous cell carcinoma, 7 had adenocarcinoma, and 1 had malignant melanoma. Five patients had neoadjuvant chemotherapy. Result: The postoperative stage was I, IIA, IIB, III, and IV in 7, 25, 12, 17, and 7 patients, respectively. The conduits for replacement of the esophagus were the stomach (62 patients) and the colon (6 patients). A neck anastomosis was performed in 28 patients and an intrathoracic anastomosis in 40. The anastomosis techniques were hand-sewing (44 patients) and stapling (24 patients). Among the early complications, anastomotic leakage (3 patients) involved only radiologic leakage that recovered spontaneously; the anastomosis technique had no correlation with postoperative leakage (stapling, 2 patients; hand-sewing, 1 patient). There were 3 respiratory failures, 6 cases of pneumonia, 1 fulminant hepatitis, 1 bleeding, and 1 sepsis. The 2 early postoperative deaths were due to fulminant hepatitis and sepsis. Among the 68 patients, 23 had postoperative adjuvant therapy and 55 were followed up. The follow-up period was 23.73±22.18 months (1–76 months). There were 5 patients in stage I, 21 in stage IIA, 9 in stage IIB, 15 in stage III, and 5 in stage IV.
The 1-, 3-, and 5-year survival rates of the patients who could be followed up completely were 58.43±6.5%, 35.48±7.5%, and 18.81±7.7%, respectively. Statistical analysis showed that the long-term survival difference was associated with overall stage, T stage, and N stage (p<0.05) but not with histology, sex, anastomosis location, tumor location, or pre- and postoperative adjuvant therapy. Conclusion: Early diagnosis, aggressive operative resection, and adequate postoperative treatment may have contributed to the observed increase in survival for esophageal cancer patients.

Radiation Therapy Using M3 Wax Bolus in Patients with Malignant Scalp Tumors (악성 두피 종양(Scalp) 환자의 M3 Wax Bolus를 이용한 방사선치료)

  • Kwon, Da Eun;Hwang, Ji Hye;Park, In Seo;Yang, Jun Cheol;Kim, Su Jin;You, Ah Young;Won, Young Jinn;Kwon, Kyung Tae
    • The Journal of Korean Society for Radiation Therapy / v.31 no.1 / pp.75-81 / 2019
  • Purpose: Helmet-type boluses are being manufactured with 3D printers because of the disadvantages of conventional bolus materials when photon beams are used to treat scalp malignancies. However, PLA, the material used, has a higher density than tissue-equivalent material, and wearing it is inconvenient for the patient. In this study, we attempted to treat malignant scalp tumors using an M3 wax helmet made with a 3D printer. Methods and materials: For the modeling of the helmet-type M3 wax, the head phantom was scanned by CT and acquired as a DICOM file. The helmet portion over the scalp was delineated with a helmet contour. The M3 wax helmet was made by dissolving paraffin wax, mixing in magnesium oxide and calcium carbonate, solidifying the mixture in a PLA 3D-printed helmet, and then removing the PLA helmet from the surface. The treatment plan was a 10-portal intensity-modulated radiation therapy (IMRT) plan with a therapeutic dose of 200 cGy, using the Analytical Anisotropic Algorithm (AAA) of Eclipse. The dose was then verified using EBT3 film and a MOSFET (metal-oxide-semiconductor field-effect transistor) dosimeter, and the IMRT plan was measured 3 times at 3 points by reproducing the head phantom under the same conditions as in the CT simulation room. Results: The Hounsfield unit (HU) of the bolus measured by CT was 52±37.1. The TPS dose was 186.6 cGy, 193.2 cGy, and 190.6 cGy at M3 wax bolus measurement points A, B, and C, and the doses measured three times with the MOSFET were 179.66±2.62 cGy, 184.33±1.24 cGy, and 195.33±1.69 cGy, giving error rates of -3.71 %, -4.59 %, and 2.48 %. The doses measured with EBT3 film were 182.00±1.63 cGy, 193.66±2.05 cGy, and 196±2.16 cGy, with error rates of -2.46 %, 0.23 %, and 2.83 %. Conclusions: The thickness of the M3 wax bolus was 2 cm, which helped the treatment plan by easily lowering the dose to the brain.
In the treatment dose verification, the maximum error rate of the scalp surface dose was within 5 %, and generally within 3 %, across the A, B, and C measurements of both the EBT3 film and MOSFET dosimeters. The M3 wax bolus is quicker and cheaper to make than a 3D-printed one, can be reused, and, as a human-tissue-equivalent material, is very useful for the treatment of scalp malignancies. Therefore, we expect that the use of cast M3 wax boluses, which address the fabrication time and cost of large boluses and compensators made with 3D printers, will increase.
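The error rates quoted above follow from comparing each measured dose against the planned (TPS) dose. A quick recomputation from the abstract's numbers (the last-digit differences from the quoted -3.71 %, -2.46 %, and 0.23 % appear to be rounding in the abstract):

```python
# Recomputing the reported error rates: (measured - planned) / planned * 100.
# Doses in cGy, taken from the abstract (points A, B, C).
tps = [186.6, 193.2, 190.6]          # planned doses from the TPS
mosfet = [179.66, 184.33, 195.33]    # mean MOSFET readings
film = [182.00, 193.66, 196.0]       # mean EBT3 film readings

def error_rates(measured, planned):
    return [round((m - p) / p * 100, 2) for m, p in zip(measured, planned)]

print(error_rates(mosfet, tps))  # [-3.72, -4.59, 2.48]
print(error_rates(film, tps))    # [-2.47, 0.24, 2.83]
```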

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.21-41 / 2019
  • Thanks to the rapid development of information technology, the data available on the Internet has grown rapidly. In this era of big data, many studies have attempted to offer insights and demonstrate the value of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than for any other type of media. However, there are limits to the service-quality improvements that can be made directly from opinions on social media platforms. Users represent their opinions as text, images, and so on, so the raw review data are unstructured; moreover, these data sets are too large to extract new information and hidden knowledge from by human effort alone. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach that yields insights directly from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining, to extract the topics contained in the reviews, and decision tree modeling, to explain the relationship between topics and ratings. Topic mining refers to a method for finding groups of words in a collection of documents that represent the documents. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most universal. However, LDA alone is not enough to find insights that can improve service quality, because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique.
Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. This study therefore investigates an analytical approach to improving hotel service quality from unstructured review data. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain; for example, compared with the other hotels, one hotel has a good location and room condition, as extracted from its positive reviews. In contrast, from negative reviews we find what they should change; for example, one hotel should improve room conditions related to soundproofing. These results show that our approach is useful for deriving insights into the service quality of hotels: from an enormous volume of review data, it can provide practical suggestions for hotel managers. In the past, studies for improving service quality relied on surveys or interviews of customers, but these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through big data analysis, so it is a more useful tool for overcoming the limitations of surveys or interviews. Moreover, our approach easily obtains service quality information for other hotels or services in the tourism industry, because it needs only open online reviews and ratings as input; its performance should further improve if other structured and unstructured data sources are added.
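The two-stage pipeline described here can be sketched with scikit-learn. This is a minimal illustration under assumed data, not the paper's pipeline: LDA turns each review into a topic-proportion vector, and a CART regression tree then relates those proportions to the review rating.

```python
# Sketch of the topic-mining + CART approach: LDA doc-topic proportions
# become the features of a regression tree predicting the star rating,
# so tree splits reveal which topics drive positive or negative ratings.
# Reviews, ratings, and topic count are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeRegressor

reviews = [
    "great location near the station and a clean room",
    "room was noisy at night poor soundproofing",
    "friendly staff excellent location easy access",
    "thin walls noisy corridor could not sleep",
]
ratings = [5, 2, 5, 1]  # illustrative star ratings

X = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)   # per-review topic proportions

tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(doc_topics, ratings)       # CART: which topics explain ratings

print(tree.feature_importances_)    # importance of each topic for ratings
```

In practice one would inspect the top words of each LDA topic alongside the tree's splits to label, say, a "location" topic as rating-positive and a "noise" topic as rating-negative.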

Pre-operative Concurrent Chemoradiotherapy for Stage IIIA (N2) Non-Small Cell Lung Cancer (N2 병기 비소세포 폐암의 수술 전 동시화학방사선요법)

  • Lee, Kyu-Chan;Ahn, Yong-Chan;Park, Keunchil;Kim, Kwhan-Mien;Kim, Jhin-Gook;Shim, Young-Mog;Lim, Do-Hoon;Kim, Moon-Kyung;Shin, Kyung-Hwan;Kim, Dae-Yong;Huh, Seung-Jae;Rhee, Chong-Heon;Lee, Kyung-Soo
    • Radiation Oncology Journal / v.17 no.2 / pp.100-107 / 1999
  • Purpose: To evaluate the acute complications, resection rate, and tumor down-staging after pre-operative concurrent chemoradiotherapy for stage IIIA (N2) non-small cell lung cancer. Materials and Methods: Fifteen patients with non-small cell lung cancer were enrolled in this study from May 1997 to June 1998 at Samsung Medical Center. The median age of the patients was 61 (range, 45–67) years, and the male-to-female ratio was 12:3. The pathologic types were squamous cell carcinoma (11) and adenocarcinoma (4). Pre-operative clinical tumor stages were cT1 in 2 patients, cT2 in 12, and cT3 in 1, and all were N2. Ten patients were proved to be N2 by mediastinoscopic biopsy, and five had clinically evident mediastinal lymph node metastases on chest CT scans. The pre-operative radiation therapy field included the primary tumor, the ipsilateral hilum, and the mediastinum. The total radiation dose was 45 Gy over 5 weeks with a daily dose of 1.8 Gy. Pre-operative concurrent chemotherapy consisted of two cycles of intravenous cisplatin (100 mg/m²) on day 1 and oral etoposide (50 mg/m²/day) on days 1 through 14, at 4-week intervals. Surgery followed pre-operative re-evaluation, including a chest CT scan, within 3 weeks of completion of the concurrent chemoradiotherapy if there was no evidence of disease progression. Results: Full-dose radiation therapy was administered to all 15 patients. The planned two cycles of chemotherapy were completed in 11 patients, and one cycle was given to four. One treatment-related death from acute respiratory distress syndrome occurred within 15 days of surgery. Hospital admission was required in three patients: one with radiation pneumonitis and two with neutropenic fever. Hematologic complications and other acute complications, including esophagitis, were tolerable. The resection rate was 92.3% (12/13) among the 13 patients, excluding two patients who refused surgery.
Pleural seeding was found in one patient after thoracotomy, and tumor resection was not feasible. Post-operative tumor stages were pT0 in 3 patients, pT1 in 6, and pT2 in 3; lymph node status was pN0 in 8 patients, pN1 in 1, and pN2 in 3. Pathologic tumor down-staging occurred in 61.5% (8/13), including complete response in three patients (23.1%). The tumor stage was unchanged in four patients (30.8%), and progression occurred in one (7.7%). Conclusions: Pre-operative concurrent chemoradiotherapy for stage IIIA (N2) non-small cell lung cancer demonstrated satisfactory results with no increase in severe acute complications. This treatment scheme deserves further patient accrual with long-term follow-up.


The Relations between Financial Constraints and Dividend Smoothing of Innovative Small and Medium Sized Enterprises (혁신형 중소기업의 재무적 제약과 배당스무딩간의 관계)

  • Shin, Min-Shik;Kim, Soo-Eun
    • Korean small business review / v.31 no.4 / pp.67-93 / 2009
  • The purpose of this paper is to explore the relations between financial constraints and dividend smoothing of innovative small and medium-sized enterprises (SMEs) listed on the Korea Securities Market and Kosdaq Market of the Korea Exchange. Innovative SMEs are defined as firms with a high level of R&D intensity, measured by the ratio of R&D investment to total sales, following Chauvin and Hirschey (1993). R&D investment plays an important role as an innovative driver that can increase the future growth opportunities and profitability of a firm; therefore, R&D investment has a large, positive, and consistent influence on the market value of the firm. From this point of view, we expect that innovative SMEs can adjust dividend payments faster than noninnovative SMEs, on the grounds of their future growth opportunities and profitability. We also expect that financially unconstrained firms can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through market accessibility. Aivazian et al. (2006) assert that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms. We collect the sample firms from among all SMEs listed on the Korea Securities Market and Kosdaq Market of the Korea Exchange during the period from January 1999 to December 2007, from the KIS Value Library database. The total number of firm-year observations throughout the entire period is 5,544; the number of firm-year observations of dividend firms is 2,919, and that of non-dividend firms is 2,625. About 53% (2,919) of these 5,544 observations involve firms that made a dividend payment.
The dividend firms are divided into two groups according to R&D intensity: innovative SMEs with R&D intensity above the median, and noninnovative SMEs below the median. The number of firm-year observations of the innovative SMEs is 1,506, and that of the noninnovative SMEs is 1,413. Furthermore, the innovative SMEs are divided into two groups according to the level of financial constraints: financially unconstrained firms (894 firm-year observations) and financially constrained firms (612). Although all available firm-year observations of the dividend firms were collected, financial industries such as banks, securities companies, insurance companies, and other financial services companies were deleted, because their capital structure and business style differ widely from those of general manufacturing firms. Stock repurchases were included in dividend payments because Grullon and Michaely (2002) examined the substitution hypothesis between dividends and stock repurchases. Our data form an unbalanced panel, since there is no requirement that firm-year observations be available for every firm during the entire period from January 1999 to December 2007. We first estimate the classic Lintner (1956) dividend adjustment model, in which the decision to smooth dividends or to adopt a residual dividend policy depends on financial constraints measured by market accessibility. The Lintner model indicates that firms maintain a stable, long-run target payout ratio and partially adjust the gap between the current payout ratio and the target payout ratio each year.
In the Lintner model, the dependent variable is the current dividend per share (DPSt), and the independent variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt). We hypothesize that firms partially adjust the gap between the current dividend per share (DPSt) and the target payout ratio (Ω) each year, when the past dividend per share (DPSt-1) deviates from the target payout ratio (Ω). Second, we estimate an expanded model that extends the Lintner model by including the determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transaction cost theory. In the expanded model, the dependent variable is the current dividend per share (DPSt), the explanatory variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt), and the control variables are the current capital expenditure ratio (CEAt), the current leverage ratio (LEVt), the current operating return on assets (ROAt), the current business risk (RISKt), the current trading volume turnover ratio (TURNt), and the current dividend premium (DPREMt). Among these control variables, CEAt, LEVt, and ROAt are suggested by the residual dividend theory and the agency theory; ROAt and RISKt by the dividend signaling theory; TURNt by the transaction cost theory; and DPREMt by the catering theory. Third, we estimate the Lintner model and the expanded model using the panel data of the financially unconstrained and financially constrained firms separately. We expect that the financially unconstrained firms can adjust dividend payments faster than the financially constrained firms, because the former can finance investment funds more easily through market accessibility.
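The Lintner partial-adjustment regression described in the abstract can be illustrated on synthetic data. In the model DPSt = a + b1·DPSt-1 + b2·EPSt, the speed of dividend adjustment is 1 - b1: a smaller coefficient on the lagged dividend means faster adjustment toward the target payout. The sketch below (simulated single-firm series, not the paper's panel estimation) recovers an assumed adjustment speed by ordinary least squares.

```python
# Sketch of the Lintner (1956) partial-adjustment model on simulated data:
# DPS_t = DPS_{t-1} + speed * (target_ratio * EPS_t - DPS_{t-1}) + noise,
# which OLS rewrites as DPS_t = a + b1*DPS_{t-1} + b2*EPS_t with
# adjustment speed 1 - b1.
import numpy as np

rng = np.random.default_rng(0)

# Simulate one firm: target payout 40% of EPS, adjustment speed 0.3.
T, target_ratio, speed = 200, 0.4, 0.3
eps = 2.0 + 0.5 * rng.standard_normal(T)
dps = np.zeros(T)
for t in range(1, T):
    target = target_ratio * eps[t]
    dps[t] = dps[t - 1] + speed * (target - dps[t - 1]) + 0.01 * rng.standard_normal()

# OLS of DPS_t on a constant, DPS_{t-1}, and EPS_t.
X = np.column_stack([np.ones(T - 1), dps[:-1], eps[1:]])
y = dps[1:]
a, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"estimated adjustment speed 1 - b1 = {1 - b1:.2f}")  # close to 0.30 by construction
```

The paper's comparison of adjustment speeds across groups (innovative vs. noninnovative, constrained vs. unconstrained) amounts to estimating this regression on each sub-panel and comparing the implied 1 - b1.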
We analyzed descriptive statistics such as the mean, standard deviation, and median to remove outliers from the panel data; conducted a one-way analysis of variance to check for industry-specific effects; and conducted difference tests of firm characteristic variables between innovative and noninnovative SMEs, as well as between financially unconstrained and financially constrained firms. We also conducted correlation analysis and variance inflation factor analysis to detect any multicollinearity among the independent variables; both the correlation coefficients and the variance inflation factors are low enough that multicollinearity may be ignored. Furthermore, we estimate both the Lintner model and the expanded model using panel regression analysis. We first test whether time-specific and firm-specific effects are involved in our panel data using the Lagrange multiplier test proposed by Breusch and Pagan (1980), and then conduct a Hausman test, which shows that the fixed effects model fits our panel data better than the random effects model. The main results of this study can be summarized as follows. The determinants suggested by the major theories of dividends (residual dividend theory, dividend signaling theory, agency theory, catering theory, and transaction cost theory) significantly explain the dividend policy of the innovative SMEs. The Lintner model indicates that firms maintain a stable, long-run target payout ratio and partially adjust the gap between the current payout ratio and the target payout ratio each year. Of the core variables in the Lintner model, the past dividend per share has a larger effect on dividend smoothing than the current earnings per share.
These results suggest that the innovative SMEs maintain a stable, long-run dividend policy that sustains the past dividend per share level in the absence of special corporate reasons. The main results show that the dividend adjustment speed of the innovative SMEs is faster than that of the noninnovative SMEs; that is, innovative SMEs with a high level of R&D intensity can adjust dividend payments faster, on the grounds of their future growth opportunities and profitability. The results also show that the dividend adjustment speed of the financially unconstrained SMEs is faster than that of the financially constrained SMEs; that is, financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster, on the grounds of their ability to finance investment funds through market accessibility. Furthermore, additional results show that the dividend adjustment speed of the innovative SMEs classified by the Small and Medium Business Administration is faster than that of the unclassified SMEs; these firms are linked with various financial policies and services such as credit guarantee services, policy funds for SMEs, venture investment funds, insurance programs, and so on. In conclusion, the past dividend per share and the current earnings per share suggested by the Lintner model mainly explain the dividend adjustment speed of the innovative SMEs, and financial constraints explain it partially. Therefore, if managers properly understand the relations between financial constraints and dividend smoothing, they can maintain a stable, long-run dividend policy for innovative SMEs through dividend smoothing. These are encouraging results for the Korean government, that is, the Small and Medium Business Administration, which has implemented many policies committed to innovative SMEs.
This paper has a few limitations, as it may be only an early study of the relations between financial constraints and dividend smoothing of innovative SMEs. Specifically, it may not adequately capture all of the subtle features of the innovative SMEs and the financially unconstrained SMEs. Therefore, we think it necessary to expand the sample firms and control variables and to use more elaborate analysis methods in future studies.