• Title/Summary/Keyword: 3-dimensional structure

Search Results: 2,501

A Study on Movement Characteristics of Dalgubal Drum Dance (달구벌 북춤 춤사위의 특성에 대한 고찰)

  • Choi, Won-sun
    • (The) Research of the performance art and culture
    • /
    • no.42
    • /
    • pp.147-181
    • /
    • 2021
  • Dalgubal drum dance is inherited in a recreated form that incorporates regional symbolism and the dance philosophy and artistry of its creator, Young Hwangbo, building on the traditional drum dance of the Yeongnam region. Having gained popularity as a transformation of traditional Korean culture, the dance has been invited not only to venues in the Yeongnam region, including Daegu, but also to various international stages. This study explores the movement characteristics of the Dalgubal drum dance and the unique charm and symbolic meaning of the dance. A specific analysis was conducted on video footage of the dance from the 89th Korean Myeongmujeon, using Laban Movement Analysis (LMA) as the research method. The features revealed by the LMA analysis in its four categories (Body, Effort, Shape, and Space) reflect the simple yet cheerful personality and the strong yet patient character of the people of Daegu. The harmony of drum sounds (music) and movements (dance) creates the varied character of the dance and reveals the beauty and excitement unique to Korean dance. In particular, the drum play and its related movements create curved, linear spatial patterns in the arm movements, a Spiral Shape in body posture, and diverse floor patterns occupying the whole stage space. These movements show a three-dimensional spatial beauty and the artistic ideas behind the recreation of the traditional drum dance, conceived with the spatial structure of the proscenium stage in mind. In addition, the well-organized structure and harmonious movements of the dance express traditional Korean philosophy, implying heaven, earth, and humanity as a whole and the harmony of yin and yang. The dance aims at communication between audience and dancers through shared excitement and the aesthetic beauty of the dance. This can be interpreted as a meaningful expression of traditional Korean philosophy, developed with the unique value and characteristics of Korean dance.

Numerical Test for the 2D Q Tomography Inversion Based on the Stochastic Ground-motion Model (추계학적 지진동모델에 기반한 2D Q 토모그래피 수치모델 역산)

  • Yun, Kwan-Hee;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.3
    • /
    • pp.191-202
    • /
    • 2007
  • To identify the detailed attenuation structure of the southern Korean Peninsula, a numerical test was conducted for the Q tomography inversion to be applied to the dataset accumulated through 2005. In particular, the stochastic point-source ground-motion model (STGM model; Boore, 2003) was adopted for the 2D Q tomography inversion so that the result can be applied directly to simulating strong ground motion. Simultaneous inversion of the STGM model parameters with a regional single-Q model was performed to evaluate the source and site effects, which were needed to generate an artificial dataset for the numerical test. The artificial dataset consists of simulated Fourier spectra that resemble the real data in the magnitude-distance-frequency-error distribution, except that the regional single-Q model is replaced with a checkerboard pattern of laterally varying high and low Q models. The total number of Q blocks used for the checkerboard test was 75 (grid size of $35 \times 44\,km^2$ per block); a Q functional form of $Q_0 f^{\eta}$ ($Q_0$ = 100 or 500, 0.0 < $\eta$ < 1.0) was assigned to each Q block. The checkerboard test was implemented in three steps. In the first step, initial Q-values for the 75 blocks were estimated. In the second step, the site amplification function was estimated by using, as an initial guess of A(f), the mean site amplification function (Yun and Suh, 2007) for each site class. The last step inverts the tomographic Q-values of the 75 blocks based on the results of the first and second steps. The checkerboard test demonstrated that Q-values can be robustly estimated with the 2D Q tomography inversion method even in the presence of source and site effects perturbed from the true input model.
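The checkerboard input model described above can be sketched in a few lines. This is an illustration only: the abstract specifies 75 blocks and the two $Q_0$ levels, while the 15 × 5 layout and the value of η used here are assumptions.

```python
import numpy as np

# Sketch of a checkerboard Q model: 75 blocks with alternating Q0 = 100 / 500,
# each block following Q(f) = Q0 * f**eta. The 15 x 5 layout and eta = 0.5
# are illustrative assumptions; the abstract gives only the 75-block total.
nx, ny = 5, 15
parity = (np.arange(ny)[:, None] + np.arange(nx)[None, :]) % 2
Q0 = np.where(parity == 0, 500, 100)      # checkerboard of high / low Q0 values

def Q(f, q0, eta=0.5):
    """Frequency-dependent quality factor Q(f) = Q0 * f**eta."""
    return q0 * f ** eta
```

Such a synthetic model is what the inversion must recover; comparing the inverted block values against this known input is what makes the test a "checkerboard" test.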

The Study on the Anssolim Technique of Columns of Main-hall Architectures in Korean Palaces (궁궐 정전건축 기둥 안쏠림기법 고찰)

  • Kim, Derk Moon
    • Korean Journal of Heritage: History & Science
    • /
    • v.43 no.2
    • /
    • pp.40-59
    • /
    • 2010
  • Anssolim is a unique technique in traditional architecture in which standing columns lean inward toward the interior of the building; it has not been thoroughly investigated to this day. Given the dearth of previous studies, the anssolim technique can only be examined through detailed three-dimensional surveys. The main halls of Korean palaces can be seen as buildings built with the regulations of the day in mind, making them excellent research subjects for studying the anssolim technique. The findings can be summarized as follows. 1. In the main halls that were studied, anssolim was applied most strongly to the main-space (eokan) columns and lessened toward the peripheral columns. 2. The largest second-floor cheoma-column anssolim was placed in the eokan and became smaller toward the peripheral columns; in the eokan, the columns were arranged according to the size of the anssolim. 3. The second-floor cheoma-column anssolim in the middle-floor main halls was generally a third or a quarter of the size of that on the first floor. As on the first floor, the largest anssolim was applied to the eokan columns and became gradually smaller toward the peripheral columns. 4. In the palace main halls, the largest anssolim was used for the eokan columns and became smaller toward the peripheral columns. This unique arrangement can be seen as a Korean technique that deviates from the techniques of the Chinese "Yingzaofashi (營造法式)". Although this study is limited in that it covers only the main halls of Korean palaces, it is significant in that it sheds new light on the technological implications of the anssolim technique and provides important data for research into the history of technology. Although this type of data is difficult to extrapolate, it has been made as accurate as possible by minimizing the margin of error for the palaces that were actually surveyed.

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programming (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • Frequency Scanning Interferometry(FSI) system, one of the most promising optical surface measurement techniques, generally results in superior optical performance comparing with other 3-dimensional measuring methods as its hardware structure is fixed in operation and only the light frequency is scanned in a specific spectral band without vertical scanning of the target surface or the objective lens. FSI system collects a set of images of interference fringe by changing the frequency of light source. After that, it transforms intensity data of acquired image into frequency information, and calculates the height profile of target objects with the help of frequency analysis based on Fast Fourier Transform(FFT). However, it still suffers from optical noise on target surfaces and relatively long processing time due to the number of images acquired in frequency scanning phase. 1) a Polarization-based Frequency Scanning Interferometry(PFSI) is proposed for optical noise robustness. It consists of tunable laser for light source, ${\lambda}/4$ plate in front of reference mirror, ${\lambda}/4$ plate in front of target object, polarizing beam splitter, polarizer in front of image sensor, polarizer in front of the fiber coupled light source, ${\lambda}/2$ plate between PBS and polarizer of the light source. Using the proposed system, we can solve the problem of fringe image with low contrast by using polarization technique. Also, we can control light distribution of object beam and reference beam. 2) the signal processing acceleration method is proposed for PFSI, based on parallel processing architecture, which consists of parallel processing hardware and software such as Graphic Processing Unit(GPU) and Compute Unified Device Architecture(CUDA). As a result, the processing time reaches into tact time level of real-time processing. 
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiment and the obtained results show the effectiveness of the proposed system and method.
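The FFT-based height recovery described in the abstract can be sketched at a single pixel as follows. This is a simplified, noise-free illustration rather than the authors' implementation: the scan bandwidth, frame count, and fringe model are assumptions. The key idea is that the fringe intensity oscillates at 2z/c cycles per hertz of scanned optical frequency, so the FFT peak bin maps directly to the height z.

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def simulate_fringe(z, freqs):
    """Interference intensity at one pixel for a target at height z:
    the optical path difference 2*z gives I = 0.5 + 0.5*cos(4*pi*f*z/C)."""
    return 0.5 + 0.5 * np.cos(4 * np.pi * freqs * z / C)

def height_from_fringe(intensity, freqs):
    """Recover z from the FFT peak of intensity vs. optical frequency.
    The fringe oscillates at 2*z/C cycles per Hz of scanned frequency."""
    n = len(freqs)
    df = freqs[1] - freqs[0]
    spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
    k = int(np.argmax(spectrum))      # peak bin index
    f_osc = k / (n * df)              # oscillation rate, cycles per Hz
    return f_osc * C / 2.0

# Assumed scan: 512 frames over a 100 GHz band around 193 THz.
freqs = 193e12 + np.arange(512) * (100e9 / 512)
z_est = height_from_fringe(simulate_fringe(0.010, freqs), freqs)  # target at 10 mm
```

Note that the height resolution of this naive peak pick is c/(2B) for scan bandwidth B (here 1.5 mm); sub-bin interpolation or phase analysis, as in practical FSI systems, refines this.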

Application of Borehole Radar to Tunnel Detection (시추공 레이다 탐사에 의한 지하 터널 탐지 적용성 연구)

  • Cho, Seong-Jun;Kim, Jung-Ho;Kim, Chang-Ryol;Son, Jeong-Sul;Sung, Nak-Hun
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.4
    • /
    • pp.279-290
    • /
    • 2006
  • The borehole radar methods used for tunnel detection are mainly classified into borehole radar reflection, directional-antenna, crosshole scanning, and radar tomography methods. In this study, we investigated the feasibility and limitations of each method for tunnel detection through case studies. In the borehole radar reflection data, the diffraction signals of the upper wings of the hyperbolas reflected from the tunnel were much clearer than those of the lower wings, and the upper and lower wings spread out over more than 10 m of traces above and below the peaks of the hyperbolas. As the ratio of borehole diameter to antenna length increases, ringing in the data grows stronger due to the increased impedance mismatch between the antennas and the water in the boreholes. It was also found that the reflection signals from the tunnel could be enhanced by using the optimal offset distance between the transmitter and receiver antennas. Nevertheless, the borehole radar reflection data could not provide directional information about reflectors in the subsurface. The direction-finding antenna system has the advantage of determining the three-dimensional location of a tunnel from a single-borehole survey, although the cost is still very high and considerable expertise is required. Crosshole scanning data can be a good indicator for tunnel detection and gives more reliable results when a borehole radar reflection survey is carried out together with it. Images of the subsurface can also be reconstructed using travel-time tomography, which provides the physical properties of the medium and is effective for imaging underground structures such as tunnels.
Based on these results, we suggest a cost-effective field procedure for tunnel detection using borehole radar techniques: a borehole radar reflection survey with a dipole antenna is first applied to pick out anomalous regions along the borehole, and crosshole scanning or a reflection survey with a directional antenna is then applied only to those anomalous regions to detect the tunnel.

Stress distribution of molars restored with minimal invasive and conventional technique: a 3-D finite element analysis (최소 침습적 충진 및 통상적 인레이 법으로 수복한 대구치의 응력 분포: 3-D 유한 요소 해석)

  • Yang, Sunmi;Kim, Seon-mi;Choi, Namki;Kim, Jae-hwan;Yang, Sung-Pyo;Yang, Hongso
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.34 no.4
    • /
    • pp.297-305
    • /
    • 2018
  • Purpose: This study aimed to analyze the stress distribution and maximum von Mises stress generated in intracoronal restorations and in the tooth structure of mandibular molars with various cavity designs and materials. Materials and Methods: Three-dimensional solid models of a mandibular molar were designed: an O inlay cavity restored with composite resin and gold (OR-C, OG-C), an MO inlay cavity restored with composite resin and gold (MR-C, MG-C), and minimally invasive cavities on the occlusal and proximal surfaces (OR-M, MR-M). To simulate masticatory force, a static axial load with a total force of 200 N was applied to the tooth at 10 occlusal contact points. A finite element analysis was performed to predict the stress distribution generated by occlusal loading. Results: Restorations with the minimal cavity design generated significantly lower maximum von Mises stresses (OR-M model: 26.8 MPa; MR-M model: 72.7 MPa) than those with the conventional cavity design (341.9 MPa to 397.2 MPa). In the tooth structure, the magnitudes of the maximum von Mises stresses were similar between the models with the conventional design (372.8 - 412.9 MPa) and those with the minimal cavity design (361.1 - 384.4 MPa). Conclusion: The minimally invasive models generated smaller maximum von Mises stresses within the restorations. Within the enamel, similar maximum von Mises stresses were observed for the models with the minimal cavity design and those with the conventional design.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the rather large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions to those having a triangular, trapezoidal, or other pre-defined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfactory computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can still occur: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero, and it has been observed that in almost all cases the points shared among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the memorization of the functions. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we will refer in the following. The term set has a universe of discourse with 128 elements (to obtain good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits necessary for these specifications are therefore 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all values of the membership functions, each memory row would have to store the membership value of every fuzzy set: the word dimension would be 8 × 5 bits, and the memory dimension would therefore have been 128 × 40 bits. Consistent with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net): if the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions; in any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values for any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
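The word-length arithmetic above (24-bit words versus 40-bit words for the example term set) can be reproduced in a few lines; the function names are illustrative, not from the paper.

```python
# Word-length and memory-size arithmetic from the text (names illustrative).
def sparse_word_bits(nfm, value_bits, index_bits):
    """Length = nfm * (dm(m) + dm(fm)): only non-null memberships are stored,
    each as a (value, function-index) pair."""
    return nfm * (value_bits + index_bits)

def full_word_bits(n_sets, value_bits):
    """Vectorial memorization: one membership value per fuzzy set per row."""
    return n_sets * value_bits

# Paper's example: 128-element universe, 8 fuzzy sets, 32 truth levels (5 bits),
# 3-bit function index, at most nfm = 3 non-null memberships per element.
U = 128
sparse_total = U * sparse_word_bits(3, 5, 3)   # 128 * 24 bits
full_total = U * full_word_bits(8, 5)          # 128 * 40 bits
```

The saving grows with the number of fuzzy sets, since only nfm bounds the sparse word length, not the total set count.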


Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2016 (설비공학 분야의 최근 연구 동향 : 2016년 학회지 논문에 대한 종합적 고찰)

  • Lee, Dae-Young;Kim, Sa Ryang;Kim, Hyun-Jung;Kim, Dong-Seon;Park, Jun-Seok;Ihm, Pyeong Chan
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.29 no.6
    • /
    • pp.327-340
    • /
    • 2017
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2016. It is intended to convey the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. The conclusions are as follows. (1) The research on thermal and fluid engineering has been reviewed in the groups of flow, heat and mass transfer, reduction of pollutant exhaust gas, cooling and heating, renewable energy systems, and flow around buildings. CFD schemes were used increasingly across all research areas. (2) Research on heat transfer has been reviewed in the categories of heat transfer characteristics, pool boiling and condensing heat transfer, and industrial heat exchangers. Research on heat transfer characteristics included results on the long-term performance variation of a plate-type enthalpy exchange element made of paper, design optimization of an extruded-type cooling structure for reducing the weight of LED street lights, and hot-plate welding of thermoplastic elastomer packing. In the area of pool boiling and condensing, the heat transfer characteristics of a finned-tube heat exchanger in a PCM (phase change material) thermal energy storage system, the influence of flow-boiling heat transfer on fouling in nanofluids, and PCM under simultaneous charging and discharging conditions were studied. In the area of industrial heat exchangers, a one-dimensional flow network model, a porous-media model, and R245fa in a plate-shell heat exchanger were studied. (3) Various studies were published in the categories of refrigeration cycles, alternative refrigeration/energy systems, and system control.
In the refrigeration cycle category, subjects included a mobile cold-storage heat exchanger, compressor reliability, an indirect refrigeration system with $CO_2$ as the secondary fluid, a heat pump for fuel-cell vehicles, heat recovery from a hybrid drier, and heat exchangers with two-port and flat tubes. In the alternative refrigeration/energy system category, subjects included a membrane module for dehumidification refrigeration, desiccant-assisted low-temperature drying, a regenerative evaporative cooler, and ejector-assisted multi-stage evaporation. In the system control category, subjects included multi-refrigeration system control, emergency cooling of data centers, and variable-speed compressor control. (4) In the field of building mechanical systems, fifteen studies were reported aimed at effective design of mechanical systems and at maximizing the energy efficiency of buildings. The topics included energy performance, HVAC systems, ventilation, renewable energies, etc. The proposed designs and the performance tests using numerical methods and experiments provide useful information and key data that can help improve the energy efficiency of buildings. (5) The field of architectural environment focused mostly on indoor environment and building energy. The main research on indoor environment concerned analyses of indoor thermal environments controlled by portable coolers, the effect of outdoor wind pressure on airflow in high-rise buildings, window airtightness in relation to filling-piece shapes, the stack effect in core-type office buildings, and the development of a movable drawer-type light shelf with an adjustable reflector depth.
Studies on building energy addressed energy consumption analysis in office buildings, prediction of the exit air temperature of a horizontal geothermal heat exchanger, LS-SVM-based modeling of the hot-water supply load for a district heating system, the energy-saving effect of an ERV system using a night-purge control method, and the effect of strengthened insulation levels on building heating and cooling loads.

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.111-125
    • /
    • 2011
  • Clustering is a process of grouping similar or related documents into a cluster and assigning a meaningful concept to that cluster. By doing so, clustering facilitates fast and accurate search for relevant documents by narrowing the search range down to the collection of documents belonging to related clusters. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept that is most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level of the concept hierarchy, and also could not validate the semantic hierarchical relationship between a complex concept and each of its simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level of the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm to carry out complex concept detection. This system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained as a result of clustering. The preprocessing phase represents the documents as x-y coordinate values in a two-dimensional space by considering the weights of the terms appearing in the documents.
First, the documents go through a refinement process in which stopword removal and stemming are applied to extract index terms. Each index term is then assigned a TF-IDF weight, and the x-y coordinate value for each document is determined by combining the TF-IDF values of the terms in it. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated with the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then the distance between any two clusters is measured, and the closest clusters are merged into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, feature selection is applied to validate the appropriateness of the cluster concepts built by the HOC algorithm, i.e. to check that they have meaningful hierarchical relationships. Feature selection is a method of extracting key features from a document by identifying and weighting its important and representative terms. To select key features correctly, a method is needed to determine how much each term contributes to the class of the document. Among the several methods that achieve this goal, this paper adopts the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm greatly contributes to detecting and producing complex concepts by generating the concept hierarchy as a lattice structure.
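The $\chi^2$ term-class statistic used in the validation phase is conventionally computed from a 2×2 contingency table of term occurrence versus class membership. A minimal sketch follows; the function name and the example counts are illustrative, not taken from the paper.

```python
def chi_square(A, B, C, D):
    """Chi-square dependency between term t and class c, where
    A = docs in c containing t,  B = docs outside c containing t,
    C = docs in c lacking t,     D = docs outside c lacking t.
    chi2 = N*(A*D - C*B)^2 / ((A+C)*(B+D)*(A+B)*(C+D))."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - C * B) ** 2 / denom if denom else 0.0
```

A larger value indicates stronger dependence between the term and the class; terms with the highest values are kept as the representative features of a cluster concept.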

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The study approaches the predictive models from the perspective of two different analyses. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: in order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years, or three years ahead. A total of six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed of these, which yields better performance than the others. We obtained the rights-issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as input variables for each model, and the rights-issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The experimental results show that the prediction accuracies for the data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for the data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experiments also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction, whereas long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider variety of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared across other data mining techniques such as neural networks, logistic regression, and SVM.
Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
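Since C5.0 itself is proprietary, the general decision-tree induction idea the study relies on can be illustrated with a minimal CART-style sketch using Gini-impurity splitting. Everything here (function names, the toy dataset) is invented for demonstration and is not the paper's model.

```python
# Minimal CART-style decision tree with Gini splitting. C5.0 uses information
# gain and rule-set extraction instead, but the induction loop is analogous.

def gini(labels):
    """Gini impurity of a list of 0/1 class labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 1.0 - p * p - (1.0 - p) ** 2

def best_split(X, y):
    """Return (feature, threshold, score) minimizing weighted Gini impurity."""
    best = (None, None, gini(y))
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[j] <= t]
            right = [y[i] for i, row in enumerate(X) if row[j] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (j, t, score)
    return best

def build(X, y, depth=3):
    """Grow a tree: a leaf is the majority class, a node is (feat, thr, L, R)."""
    feat, thr, _ = best_split(X, y)
    if feat is None or depth == 0:
        return round(sum(y) / len(y))
    L = [i for i, row in enumerate(X) if row[feat] <= thr]
    R = [i for i, row in enumerate(X) if row[feat] > thr]
    return (feat, thr,
            build([X[i] for i in L], [y[i] for i in L], depth - 1),
            build([X[i] for i in R], [y[i] for i in R], depth - 1))

def predict(tree, row):
    """Walk the tree until a leaf (plain class label) is reached."""
    while isinstance(tree, tuple):
        feat, thr, left, right = tree
        tree = left if row[feat] <= thr else right
    return tree
```

In the study's setup, the 84 financial indices would be the columns of X, the rights-issue status the labels y, and a 60/40 train/test split plus rule-set extraction would sit on top of this induction step.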