• Title/Summary/Keyword: Linear Systems

Search Result 5,768

Equilibrium Fractionation of Clumped Isotopes in H2O Molecule: Insights from Quantum Chemical Calculations (양자화학 계산을 이용한 H2O 분자의 Clumped 동위원소 분배특성 분석)

  • Sehyeong Roh;Sung Keun Lee
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.36 no.4
    • /
    • pp.355-363
    • /
    • 2023
  • In this study, we explore the nature of clumped isotopes of the H2O molecule using quantum chemical calculations. In particular, we estimated the relative clumping strength among diverse isotopologues consisting of oxygen (16O, 17O, and 18O) and hydrogen (H, D, and T) isotopes, and quantified the effect of temperature on the extent of isotope clumping. The optimized equilibrium bond lengths and bond angles of the molecules are 0.9631-0.9633 Å and 104.59-104.62°, respectively, showing negligible variation among the isotopologues. The calculated frequencies of the vibrational modes of the H2O molecules decrease as the isotope mass number increases, and change more prominently with varying hydrogen isotopes than with oxygen isotopes. The equilibrium constants of isotope-substitution reactions involving these isotopologues reveal a greater effect of the hydrogen mass number than of the oxygen mass number. The calculated equilibrium constants of the clumping reactions for four heavy isotopologues showed a strong correlation; in particular, the relative clumping strengths of three isotopologues were 1.86 (HT18O), 1.16 (HT17O), and 0.703 (HD17O) times that of HD18O. The relative clumping strength decreases with increasing temperature and therefore has potential as a novel paleo-temperature proxy. The current results constitute the first theoretical study to establish the nature of clumped isotope fractions in H2O including 17O and tritium, and help to account for diverse geochemical processes in Earth's surface environments. Future efforts include calculations of isotope fractionation among various phases of H2O isotopologues with full consideration of the effect of anharmonicity in molecular vibration.
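The equilibrium constants above follow from harmonic vibrational frequencies via reduced partition function ratios. A minimal sketch of the Bigeleisen-Mayer harmonic beta factor, using illustrative textbook-style H2O/HDO frequencies rather than the values computed in the study:

```python
import math

# Bigeleisen-Mayer reduced partition function ratio in the harmonic
# approximation. The frequencies below are illustrative values for
# H2O and HDO, NOT the calculated values from this study.
HC_OVER_K = 1.43877  # h*c/k in cm*K (second radiation constant)

def beta_factor(freqs_light, freqs_heavy, temp_k):
    """Vibrational partition-function ratio (heavy/light isotopologue)."""
    beta = 1.0
    for w_l, w_h in zip(freqs_light, freqs_heavy):
        u_l = HC_OVER_K * w_l / temp_k
        u_h = HC_OVER_K * w_h / temp_k
        beta *= (u_h / u_l) * math.exp((u_l - u_h) / 2.0) \
                * (1.0 - math.exp(-u_l)) / (1.0 - math.exp(-u_h))
    return beta

H2O = [3657.0, 1595.0, 3756.0]   # cm^-1, light isotopologue
HDO = [2724.0, 1403.0, 3707.0]   # cm^-1, heavy isotopologue

beta_cold = beta_factor(H2O, HDO, 273.15)
beta_warm = beta_factor(H2O, HDO, 373.15)
```

The temperature dependence mirrors the paper's finding: the fractionation weakens toward unity as temperature rises, which is what makes isotope clumping usable as a temperature proxy.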

A Comparative Study on Factors Affecting Satisfaction by Travel Purpose for Urban Demand Response Transport Service: Focusing on Sejong Shucle (도심형 수요응답 교통서비스의 통행목적별 만족도 영향요인 비교연구: 세종특별자치시 셔클(Shucle)을 중심으로)

  • Wonchul Kim;Woo Jin Han;Juntae Park
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.2
    • /
    • pp.132-141
    • /
    • 2024
  • In this study, the differences in user satisfaction with demand-responsive transport (DRT) and the variables influencing that satisfaction were compared by travel purpose. The purpose of DRT travel was divided into commuting/school and shopping/leisure travel. A survey of 'Shucle' users in Sejong City was used for the analysis, and least absolute shrinkage and selection operator (LASSO) regression was applied to minimize the overfitting problems of the multiple linear model. The results confirmed that introducing a DRT service could eliminate blind spots in the existing public transportation network, reduce private car use, support low-carbon and public-transportation revitalization policies, and provide optimal transportation services to people with intermittent travel patterns (e.g., elderly people and housewives). In addition, the waiting time after calling a DRT vehicle, travel time after boarding, convenience of using the DRT app, punctuality of the expected departure/arrival time, and location of pickup and drop-off points were common factors that positively influenced satisfaction for both commuting/school and shopping/leisure travel. Meanwhile, the method of transfer to other transport modes affected satisfaction only for commuting/school travel, not for shopping/leisure travel. To activate the DRT service, the five common influencing factors identified above need to be considered. The analysis also identified differentiating factors between the two travel purposes: commuting/school travelers value time highly, so the convenience of transfer to other transport modes should be improved to reduce total travel time, whereas for shopping/leisure travel, ways of letting users easily and conveniently designate pickup and drop-off locations should be considered.
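The LASSO step above tempers overfitting by shrinking coefficients and zeroing out weak predictors. A minimal coordinate-descent sketch on synthetic data (the variables are invented stand-ins for the survey items, not the actual questionnaire):

```python
import numpy as np

# LASSO via coordinate descent: minimize (1/2n)||y - Xb||^2 + alpha*||b||_1.
# Synthetic data stands in for the (non-public) survey responses.
def soft_threshold(rho, lam):
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, alpha, n_sweeps=200):
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]   # residual excluding feature j
            rho = X[:, j] @ r / n            # correlation with partial residual
            b[j] = soft_threshold(rho, alpha) / (X[:, j] @ X[:, j] / n)
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
# only the first two predictors truly influence "satisfaction"
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
b = lasso_cd(X, y, alpha=0.1)
```

The L1 penalty sets the three irrelevant coefficients exactly to zero while keeping the two influential ones, which is why LASSO doubles as a variable-selection device.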

Development of Conformal Radiotherapy with Respiratory Gate Device (호흡주기에 따른 방사선입체조형치료법의 개발)

  • Chu Sung Sil;Cho Kwang Hwan;Lee Chang Geol;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.20 no.1
    • /
    • pp.41-52
    • /
    • 2002
  • Purpose: In 3D conformal radiotherapy, delivering the optimum dose to the tumor while limiting the risk to normal tissue without a marginal miss is restricted by organ motion. For tumors in the thorax and abdomen, the planning target volume (PTV) is defined to include a margin for movement of the tumor volume during treatment due to patient breathing. We designed a respiratory gating radiotherapy device (RGRD) for use during CT simulation, dose planning, and beam delivery under identical breathing-period conditions. Using the RGRD, the treatment margin for organ (thorax or abdomen) motion due to breathing can be reduced and the dose distribution for 3D conformal radiotherapy improved. Materials and Methods: Internal organ motion data for lung cancer patients were obtained by examining the diaphragm in the supine position to find the position dependency. We made an RGRD composed of a strip band, drug sensor, micro switch, and a connected on-off switch in the LINAC control box. During the same breathing period enforced by the RGRD, spiral CT scanning, virtual simulation, and 3D dose planning for lung cancer patients were performed without an extended PTV margin for free breathing, and the dose was then delivered at the same positions. We calculated effective volumes and normal tissue complication probabilities (NTCP) using dose-volume histograms (DVH) for normal lung, and analyzed changes in doses associated with selected NTCP levels and tumor control probabilities (TCP) at these new dose levels. The effects of 3D conformal radiotherapy with the RGRD were evaluated with DVHs, TCP, NTCP, and dose statistics. Results: The average movement of the diaphragm was 1.5 cm in the supine position when patients breathed freely. Depending on the location of the tumor, the PTV margin needs to be extended from 1 cm to 3 cm, which can greatly increase normal tissue irradiation and hence the normal tissue complication probability. The simple and precise RGRD is very easy to set up on patients, is sensitive to length variation (±2 mm), and delivers on-off information to patients and the LINAC machine. We evaluated the treatment plans of patients who had received conformal partial-organ lung irradiation for the treatment of thoracic malignancies. Using the RGRD, the free-breathing PTV margin can be reduced by about 2 cm for organs that move with breathing. TCP values remain almost the same (4-5% increase) for lung cancer regardless of increasing the PTV margin to 2.0 cm, but NTCP values increase rapidly (50-70% increase) upon extending the PTV margin by 2.0 cm. Conclusion: Internal organ motion due to breathing can be reduced effectively using our simple RGRD. This method can be used in clinical treatment to reduce the motion-induced margin, thereby reducing normal tissue irradiation. Using treatment planning software, the dose to normal tissues was analyzed by comparing dose statistics with and without the RGRD. The potential benefits of radiotherapy derive from the reduction or elimination of the PTV margins associated with patient breathing, as shown by the evaluation of lung cancer patients treated with 3D conformal radiotherapy.
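The margin-to-NTCP relationship the authors quantify can be illustrated with the Lyman-Kutcher-Burman (LKB) model commonly used for lung NTCP. The parameter values and toy DVHs below are illustrative assumptions, not taken from the paper:

```python
import math

# LKB NTCP from a differential DVH: gEUD = (sum v_i * D_i^(1/n))^n,
# NTCP = Phi((gEUD - TD50) / (m * TD50)). TD50, m, n below are
# illustrative whole-lung values, not fitted to this paper's patients.
def gEUD(dvh, n):
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def ntcp_lkb(dvh, td50, m, n):
    t = (gEUD(dvh, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

# (dose in Gy, volume fraction) pairs; the extended-margin plan pushes
# more lung volume to higher dose, mimicking a PTV enlarged for free breathing
dvh_gated    = [(5.0, 0.55), (20.0, 0.30), (60.0, 0.15)]
dvh_extended = [(5.0, 0.35), (20.0, 0.35), (60.0, 0.30)]

ntcp_gated    = ntcp_lkb(dvh_gated,    td50=24.5, m=0.18, n=0.87)
ntcp_extended = ntcp_lkb(dvh_extended, td50=24.5, m=0.18, n=0.87)
```

As in the paper's result, a modest margin extension shifts the DVH upward and the NTCP rises far faster than the TCP.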

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio for software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio for highly qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industries. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Owing to this effort, it has become easier to find young, beginning-level SW developers. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming a SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014 targeting 130 SW developers working in IT industries in South Korea, gathering the respondents' demographic information and characteristics, the work environments of the SW industry, and the social position of SW developers. Afterward, regression analysis and a decision tree method were performed; these two widely used data mining techniques have explanatory power and are mutually complementary. We first performed linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as a SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is a structural problem of the IT industries in South Korea, which requires SW developers to move from development to management as they are promoted. The 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer are also highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of highly motivated developers and less motivated ones, using the well-known C4.5 algorithm. The results showed that 'motivation', 'personality', and 'expected age' were again important factors influencing the job continuity intention, similar to the regression results. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method. In this research, we measured the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the associated factors. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts, can help in building SW-developer fostering policies, and can help solve the problem of unfilled SW developer recruitment in South Korea.
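C4.5-style decision rules like those above are grown by choosing, at each node, the attribute with the highest information gain. A small sketch with invented survey-style records (the attribute names are hypothetical, not the actual questionnaire items):

```python
import math
from collections import Counter

# Information gain: entropy of the labels minus the entropy remaining
# after splitting the records on one attribute.
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(records, attr, labels):
    n = len(records)
    groups = {}
    for rec, lab in zip(records, labels):
        groups.setdefault(rec[attr], []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# toy records: 'learn' = ability to learn new technology, 'region' = noise
records = [{"learn": "high", "region": "a"}, {"learn": "high", "region": "b"},
           {"learn": "low",  "region": "a"}, {"learn": "low",  "region": "b"}]
stay = ["yes", "yes", "no", "no"]   # job-continuity label

gain_learn  = info_gain(records, "learn", stay)
gain_region = info_gain(records, "region", stay)
```

Here the perfectly predictive attribute earns the full one bit of gain while the uninformative one earns zero, so the tree splits on learning ability first, matching the decision rules reported in the study.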

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from the market capitalization and stock price volatility of each company based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be derived appropriately. The model can thus provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has been studied actively in recent years, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To obtain the forecasts by sub-model used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource for increasing practical use by overcoming and improving the limitations of existing machine learning-based models.
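The stacking scheme described above can be sketched with scikit-learn's StackingClassifier: sub-models produce out-of-fold forecasts that a meta-learner combines. Here cv=7 mirrors the seven-way split of the training data, while synthetic features replace the (non-public) financial statement columns:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 160-column corporate data set.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

# Sub-models feed out-of-fold forecasts (cv=7, echoing the study's
# 7-piece split) into a logistic meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=7)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

Because the meta-learner only ever sees out-of-fold forecasts, the combination step does not leak training labels, which is what lets stacking reduce single-model bias without overfitting.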

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. Its history as a capital-market investment is short, however; bad debt began to increase again after the global financial crisis of 2009 owing to the real economic recession, and NPLs have become a major investment in recent years as domestic capital-market investment entered the NPL market in earnest. Although the domestic NPL market has received considerable attention owing to its recent overheating, research on it remains scarce because the history of capital-market investment in the domestic NPL market is short. In addition, decision-making based on more scientific and systematic analysis is required because of declining profitability and price fluctuations driven by swings in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield is achieved, using NPL market data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 items. As independent variables, only those related to the dependent variable were selected from 11 variables describing the characteristics of the real estate; one-to-one t-tests, stepwise logistic regression, and a decision tree were used for variable selection, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate is reached. This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's effectiveness; moreover, for a special purpose company the main concern is whether or not to purchase the property, so knowing whether a certain level of return will be achieved is enough to make the decision. For the dependent variable, we constructed and compared predictive models while adjusting the threshold to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. The hit-ratio average of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the seven independent variables, we constructed and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. Ten sets of training and testing data were extracted using 10-fold cross-validation; after building the models, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that it is effective to use the seven independent variables and an artificial neural network prediction model in the NPL market. The proposed model predicts in advance whether the 12% return on new items will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
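The 10-fold hit-ratio evaluation of the neural network model can be sketched as follows; the seven real predictors (purchase year, OPB, etc.) are replaced by synthetic features, so the resulting number will not match the paper's 67.40%:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic binary target standing in for "benchmark 12% return reached",
# with 7 features standing in for the study's seven predictors.
X, y = make_classification(n_samples=500, n_features=7, n_informative=5,
                           random_state=1)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
cv = KFold(n_splits=10, shuffle=True, random_state=1)

# Average accuracy over the 10 folds = the study's "hit ratio"
hit_ratio = cross_val_score(model, X, y, cv=cv).mean()
```

Averaging accuracy over 10 held-out folds, rather than a single split, is what makes the comparison between the five methodologies stable enough to rank them.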

Experimental investigation of the photoneutron production out of the high-energy photon fields at linear accelerator (고에너지 방사선치료 시 치료변수에 따른 광중성자 선량 변화 연구)

  • Kim, Yeon Su;Yoon, In Ha;Bae, Sun Myeong;Kang, Tae Young;Baek, Geum Mun;Kim, Sung Hwan;Nam, Uk Won;Lee, Jae Jin;Park, Yeong Sik
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.257-264
    • /
    • 2014
  • Purpose: Photoneutron dose in high-energy photon radiotherapy with a linear accelerator increases the risk of secondary cancer. The purpose of this investigation is to evaluate the variation of photoneutron dose with treatment method, flattening filter, dose rate, and gantry angle in radiation therapy with high-energy photon beams (E ≥ 8 MeV). Materials and Methods: A TrueBeam STx (Ver. 1.5, Varian, USA) and a Korea Tissue Equivalent Proportional Counter (KTEPC) were used to detect the photoneutron dose outside the high-energy photon field. Complex patient plans created with the Eclipse planning system (Version 10.0, Varian, USA) were used to test different treatment techniques (IMRT, VMAT), flattening filter conditions, and three different dose rates. Scattered photoneutron dose was measured at eight gantry angles with an open field (field size: 5 × 5 cm). Results: The mean detected photoneutron doses from IMRT and VMAT were 449.7 µSv and 2940.7 µSv, respectively. The mean detected photoneutron doses with the flattening filter (FF) and flattening filter free (FFF) were 2940.7 µSv and 232.0 µSv. The mean photoneutron doses for each test plan (cases 1-3) with FFF at the three dose rates (400, 1200, 2400 MU/min) were 3242.5, 3189.4, and 3191.2 µSv for case 1; 3493.2, 3482.6, and 3477.2 µSv for case 2; and 4592.2, 4580.0, and 4542.3 µSv for case 3, respectively. The mean photoneutron doses at the eight gantry angles (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°) were 3.2, 4.3, 5.3, 11.3, 14.7, 11.2, 3.7, and 3.0 µSv at 10 MV, and 373.7, 369.6, 384.4, 423.6, 447.1, 448.0, 384.5, and 377.3 µSv at 15 MV. Conclusion: The photoneutron dose can be reduced by using FFF mode and the VMAT method with the TrueBeam STx. Continuous evaluation of the photoneutron dose will decrease patients' risk of secondary cancer.

A Study on the Forest Yield Regulation by Systems Analysis (시스템분석(分析)에 의(依)한 삼림수확조절(森林收穫調節)에 관(關)한 연구(硏究))

  • Cho, Eung-hyouk
    • Korean Journal of Agricultural Science
    • /
    • v.4 no.2
    • /
    • pp.344-390
    • /
    • 1977
  • The purpose of this paper was to schedule an optimum cutting strategy that maximizes total yield from an industrial forest under certain restrictions on periodic timber removals and harvest areas, based on a linear programming technique. Sensitivity of the regulation model to variations in the restrictions was also analyzed to obtain information on how total yield changes over the planning period. The regulation procedure was carried out on the experimental forest of the Agricultural College of Seoul National University. The forest is composed of 219 cutting units and is characterized by younger age groups, which is very common in Korea. The planning period is divided into 10 cutting periods of five years each, and cutting is permissible only on stands of age groups 5-9. It is also assumed in the study that subsequent forests are established immediately after cutting existing forests, that non-stocked forest lands are planted in the first cutting period, and that established forests remain fully stocked until the next harvest. All feasible cutting regimes were defined for each unit depending on its age group. The total yield (Vi,k) of each regime expected in the planning period was projected using stand yield tables and forest inventory data, and the regime giving the highest Vi,k was selected as the optimum cutting regime. After calculating the periodic yields, cutting areas, and total yield from the optimum regimes selected without any restrictions, the upper and lower limits of periodic yields (Vj-max, Vj-min) and of periodic cutting areas (Aj-max, Aj-min) were decided. The optimum regimes under these restrictions were then selected by linear programming. The results of the study may be summarized as follows: 1. The fluctuations of periodic harvest yields and areas under cutting regimes selected without restrictions were very great, because of the irregular composition of age classes and growing stocks of the existing stands. About 68.8 percent of total yield is expected in period 10, while no yield is expected in periods 6 and 7. 2. After inspection of the above solution, restricted optimum cutting regimes were obtained under the restrictions Amin = 150 ha, Amax = 400 ha, Vmin = 5,000 m³, and Vmax = 50,000 m³, using the LP regulation model. As a result, a stable harvest yield of about 50,000 m³ per period and a relatively balanced age-group distribution are expected from period 5 onward. In this case, the loss in total yield was about 29 percent relative to the unrestricted regimes. 3. A thinning schedule could easily be treated by the model presented in the study, and the thinnings made it possible to select optimum regimes that are effective for smoothing the wood flows, not to mention increasing total yield in the planning period. 4. The stronger the restrictions in the optimum solution, the earlier the period in which balanced harvest yields and age-group distribution can be formed. In this particular case, the periodic yields were strongly affected by the constraints, and the fluctuations of harvest areas depended upon the amount of the periodic yields. 5. Because total yield decreased at an increasing rate as stronger restrictions were imposed, the loss would be very great where a strict sustained yield and a normal age-group distribution are required in the earlier periods. 6. Total yield under the same restrictions in a period was increased by lowering the felling age and extending the range of cutting age groups. It therefore seems advantageous for maximum timber yield to adopt a wider range of cutting age groups, with the lower limit set at the smallest utilizable timber size. 7. The LP regulation model presented in the study seems useful in the Korean situation for the following reasons: (1) The model can provide forest managers with the solution of where, when, and how much to cut in order to best fulfill the owner's objectives. (2) Planning is visualized as a continuous process in which new strategies are automatically evolved as changes in the forest environment are recognized. (3) The cost (measured as the decrease in total yield) of imposing restrictions can be easily evaluated. (4) A thinning schedule can be treated without difficulty. (5) The model can be applied to irregular forests. (6) Traditional regulation methods can be reinforced by the model.
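The LP formulation can be sketched at toy scale: decision variables give the area of each unit cut in each period, the objective maximizes total yield, and row constraints cap each unit's area and each period's cut area. All numbers are invented for illustration, far smaller than the 219-unit, 10-period problem in the study:

```python
import numpy as np
from scipy.optimize import linprog

# Toy harvest-scheduling LP: x[i,j] = area (ha) of unit i cut in period j.
# Yields per hectare and all bounds are invented for illustration.
yields = np.array([[120.0, 150.0],   # unit 1: m3/ha if cut in period 1 or 2
                   [ 90.0, 130.0]])  # unit 2 (stands grow, so period 2 pays more)
areas = np.array([100.0, 80.0])      # ha available per unit

c = -yields.flatten()                # linprog minimizes, so negate the yield
# each unit's total cut area cannot exceed its size
A_ub = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_ub = list(areas)
# periodic cut-area ceiling (Amax = 120 ha per period)
A_ub += [[1, 0, 1, 0],
         [0, 1, 0, 1]]
b_ub += [120.0, 120.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
total_yield = -res.fun
```

With these numbers the optimum fills period 2's 120 ha with the units that gain the most from waiting (all of unit 2, plus 40 ha of unit 1) and cuts the remaining 60 ha of unit 1 in period 1, for 23,600 m³ in total; tightening Amax and re-solving reproduces, in miniature, the yield-loss sensitivity analysis of the study.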
