• Title/Summary/Keyword: index system

Search Results: 6,519

Radiotherapy in Supraglottic Carcinoma - With Respect to Locoregional Control and Survival - (성문상부암의 방사선치료 -국소종양 제어율과 생존율을 중심으로-)

  • Nam Taek-Keun;Chung Woong-Ki;Cho Jae-Shik;Ahn Sung-Ja;Nah Byung-Sik;Oh Yoon-Kyeong
    • Radiation Oncology Journal / v.20 no.2 / pp.108-115 / 2002
  • Purpose: A retrospective study was undertaken to determine the role of conventional radiotherapy, with or without surgery, in treating supraglottic carcinoma, in terms of local control and survival. Materials and Methods: From Jan. 1986 to Oct. 1996, a total of 134 patients were treated for supraglottic carcinoma by radiotherapy with or without surgery. Of them, the 117 patients who completed radiotherapy form the basis of this study. The patients were restaged according to the revised AJCC staging system (1997). The numbers of patients in stage I, II, III, IVA, and IVB were 6 (5%), 16 (14%), 53 (45%), 32 (27%), and 10 (9%), respectively. Eighty patients were treated by radical radiotherapy with 61.2~79.2 Gy (mean: 69.2 Gy) to the primary tumor and 45.0~93.6 Gy (mean: 54.0 Gy) to the regional lymphatics. All patients with stage I and IVB disease were treated by radiotherapy alone. Thirty-seven patients underwent surgery plus postoperative radiotherapy with 45.0~68.4 Gy (mean: 56.1 Gy) to the primary tumor bed and 45.0~59.4 Gy (mean: 47.2 Gy) to the regional lymphatics. Of them, 33 patients received a total laryngectomy (± lymph node dissection), three a supraglottic horizontal laryngectomy (± lymph node dissection), and one a primary excision alone. Results: The 5-year survival rate (5YSR) of all patients was 43%. The 5YSRs of patients with stage I+II and stage III+IV disease were 49.9% and 41.2%, respectively (p=0.27); however, the disease-specific survival rate of patients with stage I disease (n=6) was 100%. The 5YSRs of patients who underwent surgery plus radiotherapy (S+RT) vs radiotherapy alone (RT) in stage II, III, and IVA were 100% vs 43% (p=0.17), 62% vs 52% (p=0.32), and 58% vs 6% (p<0.001), respectively. The 5-year actuarial locoregional control rate (5YLCR) of all patients was 57%. The 5YLCR of patients with stage I, II, III, IVA, and IVB disease was 100%, 74%, 60%, 44%, and 30%, respectively (p=0.008). The 5YLCR of patients with S+RT vs RT in stage II, III, and IVA was 100% vs 68% (p=0.29), 67% vs 55% (p=0.23), and 81% vs 20% (p<0.001), respectively. In the radiotherapy-alone group, the 5YLCR of patients with a complete, partial, and minimal response was 76%, 20%, and 0%, respectively (p<0.001). In all patients, multivariate analysis showed that N-stage, surgery or not, and age were significant factors affecting the survival rate, and that N-stage, surgery or not, and the ECOG performance index were significant factors affecting locoregional control. In the radiotherapy-alone group, multivariate analysis showed that radiation response and N-stage were significant factors affecting the overall survival rate as well as locoregional control. Conclusion: In early-stage supraglottic carcinoma, conventional radiotherapy alone is as effective as surgery plus radiotherapy and can preserve laryngeal function. In the advanced stages, however, radiotherapy combined with concurrent chemotherapy for laryngeal preservation, or surgery, should be considered. In bulky neck disease, a planned neck dissection after induction chemotherapy or before radiotherapy should be attempted whenever possible.
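
The 5-year rates quoted above are actuarial (Kaplan-Meier type) estimates. As a rough illustration of how such a figure is computed, here is a minimal sketch using the lifelines package on hypothetical follow-up data; the abstract does not state the software or censoring details of the original analysis.

```python
# A minimal sketch of an actuarial (Kaplan-Meier) 5-year survival estimate,
# using the lifelines package. The follow-up data below are hypothetical.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
# Hypothetical cohort of 117 patients: follow-up time in years and event
# indicator (1 = death observed, 0 = censored at 10 years).
durations = rng.exponential(scale=6.0, size=117).clip(max=10.0)
events = (durations < 10.0).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="all patients")

# 5-year survival rate (5YSR): the survival function evaluated at t = 5.
print("5YSR:", float(kmf.survival_function_at_times(5.0).iloc[0]))
```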

A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.21-37 / 2013
  • With the advent of the knowledge-based society, interest in intellectual property has grown, and the ICT companies leading the high-tech industry in particular are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has become possible at various levels, from the individual patent to the enterprise, industry, and country level. Through patent information we can identify technology status and analyze its impact on performance, and through network analysis we can trace the flow of knowledge, identifying changes in technology and predicting the direction of future research. Two important analyses utilize patent citation information: citation indicator analysis, based on citation frequency, and network analysis, based on citation relationships. Building on these, this study analyzes whether the size of a company affects patent citation relationships. Seventy-four S&P 500 companies that provide IT and communication services were selected. To determine the patent citation relationships between the companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the citation relationships between the companies were created. In addition, each company's total assets were collected as an index of company size: the distance between companies is defined as the absolute value of the difference between their total assets, and the simple (signed) difference is taken to describe the hierarchy between them. QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy matrices and the sociomatrices of patent citations in 2009 and 2010. The QAP correlation analysis shows that the 2009 and 2010 company patent citation networks have the highest correlation with each other. In addition, a positive correlation is found between patent citation relationships and the distance between companies, indicating that citation relationships increase when there is a difference in size between companies. A negative correlation is found between patent citation relationships and the hierarchy between companies, indicating that the patents of higher-tier companies are relatively highly regarded by lower-tier companies. The MR-QAP analysis was carried out as follows: the sociomatrix of 2010 patent citation relationships was used as the dependent variable, and the 2009 citation network together with the distance and hierarchy networks were used as independent variables, in order to find the main factors influencing the patent citation relationships between the companies in 2010. The results show that all independent variables positively influenced the 2010 patent citation relationships. In particular, the 2009 citation relationships had the most significant impact on those of 2010, indicating continuity in patent citation relationships. Taken together, the QAP correlation and MR-QAP results show that patent citation relationships between companies are affected by company size, but that the most significant factor is the citation relationships formed in the past. Maintaining patent citation relationships may therefore be strategically important for companies, both as a way to share intellectual property and as an aid in identifying partner companies for cooperation.
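
For readers unfamiliar with QAP, the correlation between two sociomatrices is assessed by permuting the rows and columns of one matrix simultaneously and comparing the observed correlation against the permutation distribution. A minimal sketch in Python, with random 74×74 binary matrices standing in for the actual citation networks:

```python
# A minimal sketch of QAP correlation, assuming a standard permutation test:
# rows and columns of one sociomatrix are permuted together, and the observed
# correlation is compared with the permutation distribution.
import numpy as np

def qap_correlation(x, y, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    off = ~np.eye(n, dtype=bool)          # ignore the diagonal (self-ties)
    yv = y[off]
    obs = np.corrcoef(x[off], yv)[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        xp = x[p][:, p]                   # permute rows and columns together
        hits += np.corrcoef(xp[off], yv)[0, 1] >= obs
    return obs, hits / n_perm             # correlation, one-sided p-value

rng = np.random.default_rng(1)
cite_2009 = (rng.random((74, 74)) < 0.05).astype(float)
cite_2010 = (rng.random((74, 74)) < 0.05).astype(float)
r, p = qap_correlation(cite_2009, cite_2010)
print(f"QAP correlation r = {r:.3f}, p = {p:.3f}")
```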

A Comparative Study of Food Habits and Body Satisfaction of Middle School Students According to Clinical Symptoms (일부 남녀 중학생의 건강 관련 임상증상에 따른 식습관과 체형관심도에 관한 연구)

  • Sung, Chung-Ja
    • Journal of the Korean Society of Food Science and Nutrition / v.34 no.2 / pp.202-208 / 2005
  • This study was conducted to examine the food habits, nutrition knowledge, and actual food intake of adolescent middle school students, based on questionnaire responses. Questionnaires were completed by 524 students, divided into a healthy group (n=289) and an unhealthy group (n=235) according to clinical signs. The two groups were further questioned on food habits, nutrition knowledge, and nutritional attitude. The results were as follows. The mean age of all subjects was 14; the heights of male and female students were 162.0 cm and 157.2 cm, and their weights were 53.4 kg and 49.4 kg, respectively. The heights and weights of male students were greater than those of female students. The body mass index (BMI) of male and female students was 20.3 kg/m² and 20.0 kg/m², respectively, and all values were within normal ranges. There were no significant differences in mean age, height, weight, or BMI between the healthy and unhealthy groups. There was no significant difference in body image recognition between the two groups, although the rate of dissatisfaction with their own body shape was significantly higher in the female unhealthy group (46.1%) than in the female healthy group (33.0%) (p<0.05). Attempts to control body weight during the previous year were more common in the female unhealthy group (59.4%) than in the female healthy group (38.4%) (p<0.01). There was no significant difference between the two groups in nutrition knowledge or nutritional attitude scores. Regarding meal frequency and meal patterns, eating breakfast fewer than four times per week was significantly more common in the female unhealthy group (44.0%) than in the female healthy group (30.7%) (p<0.01). Eating supper fewer than four times per week was also more common in the female unhealthy group (18.8%) than in the female healthy group (10.7%). The unhealthy group thus showed a higher pattern of missing both breakfast and supper. The male unhealthy group (16.7%) dined out more frequently than the male healthy group (12.3%) (p<0.01), and the female unhealthy group also snacked significantly more frequently than the female healthy group. The unhealthy group more frequently ate only one item per meal than the healthy group, although this difference was not significant. In conclusion, Korean middle school students with a higher incidence of clinical symptoms, representing an unhealthy status, missed breakfast and supper, dined out, and snacked more frequently; their breakfast quality and body image satisfaction were also lower than those of the healthy group. These results indicate a strong correlation between Korean adolescents' health status, food habits, and body image satisfaction. A more intensive program of nutritional education and monitoring should be introduced into the current Korean middle school system to optimally support and maximize the health potential of the current student population.
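
As a quick arithmetic check of the reported values, BMI = weight / height², so the male students' mean figures reproduce the stated 20.3 kg/m²:

```python
# Quick check of the reported body mass index, BMI = weight / height^2:
# the male students' mean values above give 53.4 kg / (1.62 m)^2 = 20.3 kg/m^2.
weight_kg, height_m = 53.4, 1.62
print(round(weight_kg / height_m ** 2, 1))  # 20.3
```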

Derivation of Digital Music's Ranking Change Through Time Series Clustering (시계열 군집분석을 통한 디지털 음원의 순위 변화 패턴 분류)

  • Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.171-191 / 2020
  • This study focuses on digital music, one of the most valuable cultural assets of modern society and a particularly important element in the flow of the Korean Wave. Digital music data were collected from the "Gaon Chart," a well-established music chart in Korea, and the ranking changes of songs that entered the chart were tracked over 73 weeks. Patterns with similar characteristics were then derived through time series cluster analysis, followed by a descriptive analysis of the notable features of each pattern. The research process was as follows. First, in the data collection step, time series data were collected to track the ranking changes of digital music. In the data processing step, the collected data were matched with the rankings over time, and the music titles and artist names were cleaned. The analysis then proceeded in two stages, exploratory and explanatory. The data collection period was limited to the period before the "music bulk buying" phenomenon, a reliability issue affecting music rankings in Korea: specifically, the 73 weeks from the week of December 31, 2017 to January 6, 2018 (the first week) through the week of May 19 to May 25, 2019. The analysis targets were limited to digital music released in Korea. Unlike the private music charts operating in Korea, the Gaon Chart is approved by government agencies and has basic reliability, so its ranking information can be considered more publicly trustworthy than that provided by other services. The collected data comprise the period and ranking, music title, artist name, album name, Gaon index, production company, and distribution company for every song that entered the top 100 within the collection period. In total, 7,300 chart entries in the top 100 were identified over the 73 weeks. Because digital music frequently stays on the chart for two weeks or more, duplicates were removed in pre-processing: duplicated songs were identified and located with a duplicate-check function and then deleted, yielding a list of 742 unique songs for analysis out of the 7,300 entries. Time series cluster analysis of the ranking changes of these 742 songs produced a total of 16 patterns, from which two representative patterns were identified: "steady seller" and "one-hit wonder." These two patterns were further subdivided into five patterns, taking into account each song's survival period and chart ranking. The important characteristics of each pattern are as follows. First, the artist's superstar effect and the bandwagon effect were strong in the one-hit-wonder pattern: when consumers choose digital music, they are strongly influenced by these two effects. Second, the steady-seller pattern identified music that consumers have chosen over a very long time, revealing the patterns of the most-selected songs. Contrary to popular belief, it was the steady seller (mid-term) pattern, not the one-hit-wonder pattern, that received the most choices from consumers. Particularly noteworthy is the "climbing the chart" phenomenon, contrary to the conventional pattern, which was confirmed within the steady-seller pattern. This study focuses on the change in music rankings over time, a relatively neglected area of digital music research, and attempts a new approach by classifying patterns of ranking change rather than predicting the success or ranking of individual songs.
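
The abstract names neither the clustering algorithm nor the distance measure; a common choice for ranking trajectories of unequal length is DTW-based k-means, sketched below with the tslearn package on synthetic chart runs standing in for the 742 songs.

```python
# A minimal sketch of time series clustering of ranking trajectories, assuming
# DTW-based k-means (an assumption; the abstract does not name the method).
import numpy as np
from tslearn.clustering import TimeSeriesKMeans
from tslearn.utils import to_time_series_dataset

rng = np.random.default_rng(0)
# Hypothetical weekly ranks (1 = top of the chart, 100 = bottom); shorter
# chart runs are NaN-padded by to_time_series_dataset, which DTW tolerates.
series = [np.clip(np.cumsum(rng.normal(0, 5, size=rng.integers(4, 30))) + 50,
                  1, 100) for _ in range(100)]
X = to_time_series_dataset(series)

model = TimeSeriesKMeans(n_clusters=16, metric="dtw", random_state=0)
labels = model.fit_predict(X)
print(np.bincount(labels))  # songs per ranking-change pattern
```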

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data comprise 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indexes. Unlike most prior studies, which used observed default events as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary, non-defaulting companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can thus provide stable default risk assessment for unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although the use of machine learning to predict corporate default risk has been studied actively in recent years, most studies make predictions with a single model, which raises the issue of model bias. A stable and reliable valuation methodology is required for calculating default risk, given that default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and corporate information while retaining the main advantage of machine-learning-based default risk prediction: short calculation time. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce their forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that no pair followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts making up each pair differed significantly. The forecasts of the stacking ensemble model showed statistically significant differences from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine-learning-based default risk prediction, since traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource for increasing practical use by overcoming and improving on the limitations of existing machine-learning-based models.
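
A minimal sketch of the stacking idea described above, using scikit-learn and treating the Merton-based default risk as a continuous target; the sub-models, the meta-learner, and the synthetic data are illustrative assumptions (the study's CNN sub-model would require a separate deep-learning stack).

```python
# A minimal stacking-ensemble sketch, assuming scikit-learn and a continuous
# default-risk target. All names and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in for the 10,545 x 160 financial-statement dataset (kept small here).
X = rng.normal(size=(2000, 160))
y = 1 / (1 + np.exp(-X[:, :5].sum(axis=1)))   # synthetic default risk in [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("mlp", MLPRegressor(max_iter=500, random_state=0))],
    final_estimator=Ridge(),
    cv=7,  # out-of-fold sub-model forecasts, echoing the 7-piece split above
)
stack.fit(X_tr, y_tr)
print("test R^2:", round(stack.score(X_te, y_te), 3))
```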

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because it makes the water exceed the drinking water standard in terms of color, taste, turbidity, and dissolved iron concentration, and often results in scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the formation of the brown precipitates using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of turbidity generation in groundwater. The results are used to suggest both a proper pumping technique to minimize the formation of precipitates and an optimal design of water treatment to improve water quality. The bedrock groundwater in the Pajoo area belongs to the Ca-HCO3 type, evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. Multi-step filtration with pore sizes of 6, 4, 1, 0.45, and 0.2 μm shows that the precipitates mostly fall in the colloidal size range (1 to 0.45 μm) but are concentrated (about 81%) in the range of 1 to 6 μm in terms of mass (weight) distribution. The large amounts of dissolved iron possibly originated from dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). Calculation of the saturation index (using the computer code PHREEQC), as well as examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly oxidation) due to exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater. After pumping-out, the groundwater shows progressive decreases in pH, DO, and alkalinity with elapsed time, whereas turbidity increases and then decreases with time. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction caused by the influx of free oxygen during the pumping and storage of groundwater results in the formation of brown precipitates, depending on time, PO2, and pH. To obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for sufficient time; the particle size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, since this is most effective for minimizing mixing between deep Fe(II)-rich water and shallow O2-rich water. Simultaneous pumping of shallow O2-rich water in different wells is also recommended.
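
The regression Fe(II) = 10.1 exp(-0.0009t) is a first-order decay law; a minimal sketch of recovering such coefficients with scipy, on synthetic observations standing in for the pumping-test data:

```python
# A minimal sketch of fitting the first-order decay reported above,
# Fe(II) = 10.1 exp(-0.0009 t), with scipy.optimize.curve_fit. The time
# points and concentrations below are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def fe_decay(t, a, k):
    return a * np.exp(-k * t)

t = np.linspace(0, 3000, 25)                       # elapsed time after pumping
fe_obs = (10.1 * np.exp(-0.0009 * t)
          + np.random.default_rng(0).normal(0, 0.2, t.size))

(a, k), _ = curve_fit(fe_decay, t, fe_obs, p0=(10.0, 0.001))
print(f"Fe(II) = {a:.1f} exp(-{k:.4f} t)")
```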


Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors by using financial engineering algorithms without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest an asset allocation to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model: a simple but quite intuitive portfolio strategy in which assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which can in turn produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolios for Black-Litterman users. This paper therefore suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree of each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio: the total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value; the maximum drawdown was -20.8%, the lowest value; and the Sharpe ratio, which measures return relative to risk, was the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
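
The combination step described above follows the standard Black-Litterman posterior formula, E[R] = [(τΣ)⁻¹ + PᵀΩ⁻¹P]⁻¹[(τΣ)⁻¹Π + PᵀΩ⁻¹Q]. A minimal numpy sketch with toy data for the 8 sector indexes; τ, δ, and the single view are illustrative assumptions (in the paper, P and Q come from the SVM-based intelligent views model):

```python
# A minimal numpy sketch of the Black-Litterman combination step:
#   E[R] = [(tau*S)^-1 + P' O^-1 P]^-1 [(tau*S)^-1 Pi + P' O^-1 Q]
# tau, delta, and the toy view below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                      # 8 KOSPI 200 sector indexes
A = rng.normal(size=(n, n))
sigma = A @ A.T / n + 0.01 * np.eye(n)     # toy covariance matrix
w_mkt = np.full(n, 1 / n)                  # toy market-cap weights
delta, tau = 2.5, 0.05

pi = delta * sigma @ w_mkt                 # implied equilibrium returns
                                           # (reverse optimization)

P = np.zeros((1, n)); P[0, 0] = 1.0        # one "up" view on asset 0 ...
Q = np.array([0.03])                       # ... with a 3% expected return
omega = P @ (tau * sigma) @ P.T            # view uncertainty

inv = np.linalg.inv
post = inv(inv(tau * sigma) + P.T @ inv(omega) @ P) @ (
    inv(tau * sigma) @ pi + P.T @ inv(omega) @ Q)
print("posterior expected returns:", np.round(post, 4))
```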

A Study on Estimating Shear Strength of Continuum Rock Slope (연속체 암반비탈면의 강도정수 산정 연구)

  • Kim, Hyung-Min;Lee, Su-gon;Lee, Byok-Kyu;Woo, Jae-Gyung;Hur, Ik;Lee, Jun-Ki
    • Journal of the Korean Geotechnical Society / v.35 no.5 / pp.5-19 / 2019
  • Considering the natural phenomenon in which steep slopes (65°~85°) consisting of rock mass remain stable for decades, slopes steeper than 1:0.5 (the standard slope angle for blast rock) may be applied, at the design and initial construction stages, under geotechnical conditions similar to the above. In analysing the stability of a good-to-fair continuum rock slope that can be designed as a steep slope, a general method of estimating rock mass strength properties from a design practice perspective was required: practical and generalized engineering methods of determining the properties of a rock mass are important for such slopes. The Generalized Hoek-Brown (H-B) failure criterion and the GSI (Geological Strength Index), revised and supplemented by Hoek et al. (2002), were assessed as rock mass characterization systems that fully take into account the effects of discontinuities, and have been widely used as a method of calculating equivalent Mohr-Coulomb shear strength (by balancing the areas) according to stress changes. The concept of calculating equivalent M-C shear strength according to the change of the confining stress range has been proposed, but on a slope the equivalent shear strength changes sensitively with the maximum confining stress (σ'3max, or normal stress), making it difficult to use in practical design. In this study, a method of estimating the strength properties (an iso-angle division method) that can be applied universally within the maximum confining stress range for a good-to-fair continuum rock mass slope is proposed by applying the H-B failure criterion. In order to assess the validity and applicability of the proposed method of estimating the shear strength (A), study slopes were selected by rock type (igneous, metamorphic, sedimentary) among steep slopes near existing working design sites, and the results were compared with the equivalent M-C shear strength (balancing the areas) proposed by Hoek. The equivalent M-C shear strengths of the balancing-the-areas method and the iso-angle division method were estimated with the RocLab program (geotechnical properties calculation software based on the H-B failure criterion (2002)), using as input the laboratory rock triaxial compression tests from the existing working design sites and the face mapping of discontinuities on the rock slopes of the study area. The equivalent M-C shear strengths calculated by the balancing-the-areas method showed correspondingly very large or small cohesions and internal friction angles (generally greater than 45°). The equivalent M-C shear strength of the iso-angle division method lies between the equivalent M-C shear properties of the balancing-the-areas method, with internal friction angles in the range of 30° to 42°. We compared the shear strength (A) from the iso-angle division method at the study area with the shear strength (B) of existing working design sites with the same or similar RMR grades; the difference between A and B is about 10%. The applicability of the proposed iso-angle division method was indirectly evaluated through stability analyses (limit equilibrium analysis and finite element analysis) using these strength properties. LEM results (in wet conditions) gave Fs(A) = 14.08~58.22 (average 32.9) and Fs(B) = 18.39~60.04 (average 32.2), which were similar for the same rock types. FEM results gave displacement (A) = 0.13~0.65 mm (average 0.27 mm) and displacement (B) = 0.14~1.07 mm (average 0.37 mm). Using the GSI and the Hoek-Brown failure criterion, significant results could thus be identified in the application evaluation. Therefore, the strength properties of rock mass estimated by the iso-angle division method can be applied as practical shear strengths.
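
For reference, the Generalized Hoek-Brown (2002) criterion underlying these calculations is σ₁ = σ₃ + σci(mb·σ₃/σci + s)ᵃ, with mb, s, and a derived from GSI and the disturbance factor D. A minimal sketch; the GSI, mi, σci, and D values are illustrative, not the study's face-mapping results:

```python
# A minimal sketch of the Generalized Hoek-Brown (2002) failure criterion:
#   sigma_1 = sigma_3 + sigma_ci * (mb * sigma_3 / sigma_ci + s)**a
# The input values below are illustrative assumptions.
import numpy as np

def hoek_brown_sigma1(sigma3, sigma_ci, gsi, mi, D=0.0):
    """Major principal stress at failure (Hoek et al., 2002)."""
    mb = mi * np.exp((gsi - 100) / (28 - 14 * D))
    s = np.exp((gsi - 100) / (9 - 3 * D))
    a = 0.5 + (np.exp(-gsi / 15) - np.exp(-20 / 3)) / 6
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

sigma3 = np.linspace(0.0, 2.0, 5)   # confining stress range, MPa
print(hoek_brown_sigma1(sigma3, sigma_ci=50.0, gsi=60, mi=15))
```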

A Study on the Characteristics of Enterprise R&D Capabilities Using Data Mining (데이터마이닝을 활용한 기업 R&D역량 특성에 관한 탐색 연구)

  • Kim, Sang-Gook;Lim, Jung-Sun;Park, Wan
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.1-21 / 2021
  • As the global business environment changes, uncertainties in technology development and market needs increase and competition among companies intensifies, so interest in and demands on the R&D activities of individual companies are growing. To cope with these environmental changes, R&D companies are strengthening R&D investment as one means of enhancing the qualitative competitiveness of R&D, while paying more attention to facility investment. As a result, facility and R&D investments inevitably become a burden that R&D companies bear against future uncertainty, and a management strategy of increasing R&D investment as a means of enhancing R&D capability carries high uncertainty in terms of corporate performance. In this study, the structural factors that influence companies' R&D capabilities are explored from three viewpoints (technology management capability, R&D capability, and corporate classification attributes) using data mining techniques, and the characteristics these individual factors present according to the level of R&D capability are analyzed. The study presents cluster analysis and experimental results based on evidence data for all domestic R&D companies, and is expected to provide important implications for corporate management strategies to enhance the R&D capabilities of individual companies. For the three viewpoints, detailed evaluation indexes were composed of 7, 2, and 4 items, respectively, to quantitatively measure individual levels in the corresponding areas. For technology management capability and R&D capability, the sub-item evaluation indexes currently used by domestic technology evaluation agencies were referenced, and the final detailed evaluation indexes were newly constructed in consideration of whether the data could be obtained quantitatively. For corporate classification attributes, the most basic corporate profile information was considered. In particular, to grasp the homogeneity of R&D competency levels, a comprehensive score for each company was computed from the detailed evaluation indexes of technology management capability and R&D capability, the competency levels were classified into five grades, and the grades were compared with the cluster analysis results. To interpret the comparison between the analyzed clusters and the competency grades, clusters with high and low R&D competency levels were identified, after which the characteristics of each cluster were analyzed through the detailed evaluation indexes. This procedure identified two clusters with high R&D competency and one with a low level of R&D competency, while the remaining two clusters were broadly similar at a fairly high level. Accordingly, individual characteristics according to the detailed evaluation indexes were analyzed for the two high-competency clusters and the one low-competency cluster. The results imply, first, that the faster the replacement cycle of professional managers who can respond effectively to changes in technology and market demand, the more likely they are to contribute to enhanced R&D capability. Second, in the case of a private company, it is necessary to increase the intensity of R&D input by strengthening R&D personnel's sense of belonging through conversion to a corporate company, and to clarify responsibility and authority through team-level organization. Third, since technology commercialization achievements and technology certifications occur both where they contribute to capability improvement and where they do not, there is a limit to treating them, from a management perspective, as important factors for enhancing R&D capability. Lastly, experience with utility model filings was identified as a factor with an important influence on R&D capability, confirming the need to provide motivation that encourages utility model filings. As such, the results of this study are expected to provide important implications for corporate management strategies to enhance individual companies' R&D capabilities.
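
The abstract reports five clusters but does not name the algorithm; a minimal sketch assuming k-means on standardized evaluation-index scores (the company count and indicator values are synthetic):

```python
# A minimal sketch of the clustering step, assuming k-means with five clusters
# on standardized evaluation-index scores (an assumption; the abstract does
# not name the algorithm). The 7 + 2 + 4 = 13 indicator columns are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 13))   # detailed evaluation indexes

X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))             # companies per R&D-capability cluster
```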