• Title/Summary/Keyword: performance test


Respiratory Gas Exchange and Ventilatory Functions at Maximal Exercise (최대운동시의 호흡성 가스교환 및 환기기능)

  • Cho, Yong-Keun;Jung, Tae-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.42 no.6
    • /
    • pp.900-912
    • /
    • 1995
  • Background: Although graded exercise stress tests are widely used to evaluate cardiorespiratory performance, normal standards for respiratory gas exchange and ventilatory function at maximal exercise have not been well established for Koreans. The purpose of this study is to provide reference values by sex and age, along with prediction equations for some of these variables. Method: A symptom-limited maximal exercise test was carried out with the Bruce protocol in 1,000 healthy adults (603 males and 397 females, aged 20~66 years). VC, $FEV_1$ and MVV were also determined in 885 of them. All subjects were members of a health center, and athletes were excluded. During exercise, subjects were allowed to hold the front hand rail of the treadmill for safety. Results: $VO_2\;max/m^2$, $VCO_2\;max/m^2$ and $V_E\;max/m^2$ were greater in males than in females and decreased with age. RR max was similar in men and women and decreased slightly with age. $V_T$ max was markedly greater in men but showed no significant change with age in either sex. The means of $V_T$ max/VC, $V_E$ max/MVV and BR revealed considerable ventilatory reserve at maximal exercise, even in older females. The regression equations for the cardinal parameters, obtained using exercise time (ET, min), age (A, yr), height (Ht, cm), weight (W, kg), sex (S, 0=male; 1=female), VC (L), $FEV_1$ (L) and $V_E$ max (L) as variables, are as follows: $VO_2\;max/m^2$ (L/min)=1.449+0.073ET-0.007A+0.010W-0.006Ht-0.209S; $VCO_2\;max/m^2$ (L/min)=1.672+0.063ET-0.008A+0.010W-0.005Ht-0.319S; $V_E$ max/$m^2$ (L/min)=58.161+1.503ET-0.315A-9.871S, or $V_E$ max/$m^2$ (L/min)=47.873+6.548$FEV_1$-5.715S; and $V_T$ max (L)=1.497+0.223VC-0.493S. Conclusion: Respiratory gas exchange and ventilatory variables at maximal exercise were studied in 1,000 non-athletes using the Bruce protocol.
During exercise, the subjects were allowed to hold the hand rail of the treadmill for safety. Since our subjects were members of a health center, whose fitness levels were presumably higher than those of the ordinary population, we feel that our results provide ideal target values for patients and healthy individuals to achieve.
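As a quick illustration of how the reported prediction equations are used, the first regression can be evaluated directly; this is a sketch, not part of the original paper, and the subject's measurements below are invented for illustration (the coefficients are taken verbatim from the abstract):

```python
# Abstract's regression for VO2 max per m^2 (L/min):
# VO2max/m^2 = 1.449 + 0.073*ET - 0.007*A + 0.010*W - 0.006*Ht - 0.209*S
def vo2max_per_m2(exercise_time_min, age_yr, weight_kg, height_cm, sex):
    """sex: 0 = male, 1 = female, as coded in the abstract."""
    return (1.449 + 0.073 * exercise_time_min - 0.007 * age_yr
            + 0.010 * weight_kg - 0.006 * height_cm - 0.209 * sex)

# Hypothetical subject: 40-year-old male, 70 kg, 175 cm, 12 min on the Bruce protocol
print(round(vo2max_per_m2(12.0, 40, 70, 175, 0), 3))  # -> 1.695
```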


Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can capture dependencies between the objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of distinct words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained with stochastic gradient descent and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was carried out on Old Testament texts using the deep learning package Keras, based on Theano.
After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All the optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly better, and even became worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to produce sentences closer to natural language than the 3-LSTM-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used in Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
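The input-output construction described above (20-character windows predicting the 21st character, split 70:15:15) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the toy text stands in for the Old Testament corpus:

```python
def make_windows(text, window=20):
    """Build (input, target) pairs: `window` consecutive characters -> next character."""
    chars = sorted(set(text))                      # character vocabulary
    idx = {c: i for i, c in enumerate(chars)}      # char -> integer id
    encoded = [idx[c] for c in text]
    pairs = [(encoded[i:i + window], encoded[i + window])
             for i in range(len(encoded) - window)]
    return pairs, chars

def split_70_15_15(pairs):
    """70:15:15 train/validation/test split, as in the paper (integer arithmetic)."""
    n = len(pairs)
    n_train, n_val = n * 70 // 100, n * 15 // 100
    return pairs[:n_train], pairs[n_train:n_train + n_val], pairs[n_train + n_val:]

pairs, vocab = make_windows("the beginning of the text " * 50)
train, val, test = split_70_15_15(pairs)
print(len(vocab), len(train), len(val), len(test))
```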

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are among the important factors affecting the quality and price of steel plates. So far, many steelmakers have relied on visual inspection, which depends on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the plates. However, the accuracy of this method is critically low, with judgment errors above 30%, so an accurate steel plate faults diagnosis system has long been required in the industry. To meet this need, this study proposes a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used for binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there, which stems from the fact that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the various reference groups and related variables are defined, steel plate faults data is collected and used to establish an individual Mahalanobis space for each reference group and construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated from the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization.
Also, the overall SN ratio gain is derived from the SN ratio and SN ratio gain. A negative overall SN ratio gain means that the variable should be removed, while a variable with a positive gain may be worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables. An experimental test is then carried out to verify the multi-class classification ability, yielding the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. The proposed S-MTS-based system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in industry. In addition, thanks to the variable optimization process, it can reduce the number of measurement sensors installed in the field. These results show that the proposed system not only diagnoses steel plate faults well but can also reduce operation and maintenance costs.
In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
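The core of the multi-class step (one Mahalanobis space per reference class, classification by the smallest distance) can be sketched with NumPy. This is a simplified illustration of the distance comparison only, with invented toy classes, and omits the SN-ratio variable optimization of the full S-MTS procedure:

```python
import numpy as np

def fit_spaces(groups):
    """One Mahalanobis space (mean, inverse covariance) per reference class."""
    spaces = {}
    for label, X in groups.items():
        X = np.asarray(X, dtype=float)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        spaces[label] = (mu, np.linalg.inv(cov))
    return spaces

def classify(x, spaces):
    """Assign x to the class with the smallest squared Mahalanobis distance."""
    x = np.asarray(x, dtype=float)
    def dist(space):
        mu, icov = space
        d = x - mu
        return float(d @ icov @ d)
    return min(spaces, key=lambda label: dist(spaces[label]))

# Two toy 2-D defect classes with clearly separated means
rng = np.random.default_rng(0)
groups = {"scratch": rng.normal([0, 0], 1.0, (200, 2)),
          "bump": rng.normal([6, 6], 1.0, (200, 2))}
spaces = fit_spaces(groups)
print(classify([0.5, -0.2], spaces), classify([5.8, 6.3], spaces))
```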

The Plan of Dose Reduction by Measuring and Evaluating Occupationally Exposed Dose in vivo Tests of Nuclear Medicine (핵의학 체내검사 업무 단계 별 피폭선량 측정 및 분석을 통한 피폭선량 감소 방안)

  • Kil, Sang-Hyeong;Lim, Yeong-Hyeon;Park, Kwang-Youl;Jo, Kyung-Nam;Kim, Jung-Hun;Oh, Ji-Eun;Lee, Sang-Hyup;Lee, Su-Jung;Jun, Ji-Tak;Jung, Eui-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.26-32
    • /
    • 2010
  • Purpose: The aim is to find ways to minimize the occupational radiation dose of in vivo nuclear medicine workers at each working stage, within a working environment that does not compromise examination quality or performance efficiency. Materials and Methods: The in vivo test process using radioisotopes consists of radioisotope distribution, radioisotope injection ($^{99m}Tc$, $^{18}F$-FDG), and scanning and guiding patients. Using a RadEye-G10 gamma survey meter (Thermo SCIENTIFIC), the exposure dose at each working stage was measured and evaluated. Before the radioisotope injection, the examination is explained to the patients and they are educated about precautions, in order to reduce contact time with the patients; workers are also educated about external exposure and required to wear protective devices. The exposure dose during radioisotope injection was measured both with and without protective devices, and both with the explanation and precaution education given before the injection and with it given afterwards. The total exposure doses were plotted with Microsoft Office Excel 2007, and the differences were analyzed with the Wilcoxon signed ranks test in SPSS (Statistical Package for the Social Sciences) 12.0, with p<0.01 considered statistically significant. Results: The exposure dose when injecting $^{99m}Tc$-DPD 20 mCi while wearing protective devices was 88% lower than without them, a statistically significant difference. However, the 26% decrease when injecting $^{18}F$-FDG 10 mCi with protective devices was not statistically significant.
Educating the patient before injecting $^{99m}Tc$-DPD 20 mCi reduced the exposure dose by 63% compared with education after the injection, and for $^{18}F$-FDG 10 mCi the dose was 52% lower; both differences were statistically significant. Conclusion: For examinations using $^{99m}Tc$, wearing protective devices is more effective at reducing the exposure dose, while for $^{18}F$-FDG, reducing contact time with patients is more effective. Therefore, radiation protection tailored to the characteristics of each radioisotope can shield workers from radiation more effectively and actively.
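The Wilcoxon signed ranks test used for these paired dose comparisons can be illustrated with a minimal implementation of its test statistic; the p-value computation and SPSS's exact handling are omitted, and the paired readings below are invented for illustration:

```python
def wilcoxon_w(before, after):
    """W statistic of the Wilcoxon signed ranks test for paired measurements.
    Zero differences are dropped; tied absolute differences get average ranks."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ordered = sorted(abs(d) for d in diffs)
    def avg_rank(v):                      # average rank of |difference| v
        idxs = [i + 1 for i, o in enumerate(ordered) if o == v]
        return sum(idxs) / len(idxs)
    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired dose readings: without vs. with protective devices
no_shield = [12.0, 9.5, 14.2, 11.1, 10.3]
with_shield = [2.1, 1.8, 2.5, 2.0, 10.9]
print(wilcoxon_w(no_shield, with_shield))
```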


Mature Market Sub-segmentation and Its Evaluation by the Degree of Homogeneity (동질도 평가를 통한 실버세대 세분군 분류 및 평가)

  • Bae, Jae-ho
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.27-35
    • /
    • 2010
  • As the population, buying power, and intensity of self-expression of the elderly generation increase, its importance as a market segment is also growing. Therefore, the mass marketing strategy for the elderly generation must give way to a micro-marketing strategy based on sub-segmentation that suitably captures the characteristics of this generation. Furthermore, as the customer access strategy is determined by sub-segmentation, proper segmentation is one of the key success factors for micro-marketing. Segments or sub-segments differ from sectors, because segmentation or sub-segmentation for micro-marketing is based on the homogeneity of customer needs. Theoretically, complete segmentation would reveal a single voice; in practice, however, it is impossible for reasons of economy, effectiveness, and so on. To obtain a single voice from a segment, we sometimes need to divide it into many individual cases, which leaves many segments to deal with; on the other hand, to maximize market access performance, fewer segments are preferred. In this paper we use the term "sub-segmentation" instead of "segmentation," because we divide a specific segment into more detailed segments. To sub-segment the elderly generation, this paper considers their lifestyles and life stages. To reflect these aspects, various surveys and several rounds of expert interviews and focus group interviews (FGIs) were performed. Using the results of these qualitative surveys, we define six sub-segments of the elderly generation, based on five rules: (1) mutually exclusive and collectively exhaustive (MECE) sub-segmentation, (2) important life stages, (3) notable lifestyles, (4) a minimal number of easily classifiable sub-segments, and (5) significant differences in voice among the sub-segments.
The most critical criterion for dividing the elderly market is whether one's children are married; the others are source of income, gender, and occupation. On this basis, the elderly market is divided into six sub-segments. As mentioned, the number of sub-segments is a key point for a successful marketing approach: too many sub-segments lead to narrow substantiality or a lack of actionability, while too few have no effect. Therefore, creating the optimal number of sub-segments is a critical problem for marketers. This paper presents a method, deduced from the preceding surveys, for evaluating the fitness of sub-segments. It uses the degree of homogeneity (DoH), calculated from quantitative survey questions, to measure the adequacy of sub-segments. The DoH is the ratio of significantly homogeneous questions to the total number of survey questions, where a significantly homogeneous question is one in which one answer case is selected significantly more often than the others. To determine whether a case is selected significantly more often, we use a hypothesis test whose null hypothesis (H0) is that there is no significant difference between the selection of one case and that of the others; the number of significantly homogeneous questions is thus the number of questions for which the null hypothesis is rejected. To calculate the DoH, we conducted a quantitative survey (total sample size 400; 60 questions; 4~5 answer cases per question). The sample size of the first sub-segment (no unmarried offspring, earns a living independently) is 113; of the second (no unmarried offspring, economically supported by its offspring), 57; and of the third (unmarried offspring, male, employed), 70.
The sample size of the fourth sub-segment (unmarried offspring, male, not employed) is 45; of the fifth (unmarried offspring, female, employed, whether the woman herself or her husband), 63; and of the last (unmarried offspring, female, not employed, nor her husband), 52. Statistically, each sub-segment's sample size is large enough, so we use the z-test for the hypothesis tests. At a significance level of 0.05, the DoHs of the six sub-segments are 1.00, 0.95, 0.95, 0.87, 0.93, and 1.00, respectively; at 0.01, they are 0.95, 0.87, 0.85, 0.80, 0.88, and 0.87. These results show that the first sub-segment is the most homogeneous, while the fourth has the most varied needs. With a sufficiently large sample, further segmentation of such a sub-segment would be preferable, but as the fourth sub-segment is smaller than the others, more detailed segmentation was not pursued. Measuring the fit of a sub-segment is critical to a successful micro-marketing strategy, yet until now there have been no robust rules for doing so. The method presented here will be helpful for judging the adequacy of sub-segmentation, but it has limitations that keep it from being robust: (1) it is restricted to quantitative questions; (2) deciding which types of questions to include in the calculation is difficult; (3) DoH values depend on how the questionnaire is constructed. Despite these limitations, this paper has presented a useful method for adequate sub-segmentation, which we believe can be applied widely in many areas.
Furthermore, the results of the sub-segmentation of the elderly generation can serve as a reference for mature marketing.
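The DoH calculation described above can be sketched as follows. The one-sided z-test against a uniform null proportion is one plausible reading of the paper's hypothesis test, and the toy answer counts are invented for illustration:

```python
import math

def question_is_homogeneous(counts, z_crit):
    """z-test: is the most-chosen answer selected significantly more often
    than expected under the null of equal preference (p0 = 1/k)?"""
    n, k = sum(counts), len(counts)
    p0 = 1.0 / k
    p_hat = max(counts) / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return z > z_crit                     # one-sided critical value

def degree_of_homogeneity(questions, z_crit=1.645):
    """DoH = significantly homogeneous questions / total questions."""
    hom = sum(question_is_homogeneous(c, z_crit) for c in questions)
    return hom / len(questions)

# Toy sub-segment: 4 questions, 4 answer cases each (counts per case)
answers = [[70, 10, 10, 10],   # strongly concentrated -> homogeneous
           [40, 30, 20, 10],
           [26, 25, 25, 24],   # nearly uniform -> not homogeneous
           [55, 20, 15, 10]]
print(degree_of_homogeneity(answers))
```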


Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach the model building from two perspectives. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether the two periods differ. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two or three years later. In total, therefore, six prediction models are developed and analyzed. We employ the decision tree technique, the most widely used prediction method, which builds decision trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Among the well-known decision tree induction algorithms, such as CHAID, CART, QUEST and C5.0, we use C5.0, the most recently developed, which performs better than the others. We obtained rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables: 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. Of the financial analysis variables, 84 were selected as input variables for each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models with the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for testing. The experimental results show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). This indicates that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The results also show that stability-related indices have a major impact on rights issues in short-term prediction, whereas long-term prediction is affected by indices of profitability, stability, activity and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different types of industries show different patterns of rights issues. We conclude that stakeholders should consider stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression and SVM.
Second, new prediction models should be developed and evaluated that include variables which capital structure theory suggests are relevant to rights issues.
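A C5.0-style tree grows by repeatedly choosing the split with the best information gain. The split-selection core can be sketched in plain Python as a single entropy-based stump; this stands in for, and does not reproduce, the full C5.0 induction the paper runs via PASW Modeler, and the toy data is invented:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels):
    """Find the (feature index, threshold) minimizing weighted child entropy."""
    best = (None, None, float("inf"))
    for f in range(len(rows[0])):
        for thr in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= thr]
            right = [y for r, y in zip(rows, labels) if r[f] > thr]
            if not left or not right:
                continue
            score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(rows)
            if score < best[2]:
                best = (f, thr, score)
    return best

# Toy data: [stability index, profitability index] -> rights issue (1) or not (0)
rows = [[0.2, 1.1], [0.3, 0.9], [0.8, 1.0], [0.9, 1.2]]
labels = [1, 1, 0, 0]
feature, threshold, score = best_split(rows, labels)
print(feature, threshold, score)
```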

The Effect of Pulmonary Rehabilitation in Patients with Chronic Lung Disease (만성 폐질환 환자에서의 호흡재활치료의 효과)

  • Choe, Kang Hyeon;Park, Young Joo;Cho, Won Kyung;Lim, Chae Man;Lee, Sang Do;Koh, Youn Suck;Kim, Woo Sung;Kim, Dong Soon;Kim, Won Dong
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.5
    • /
    • pp.736-745
    • /
    • 1996
  • Background: Pulmonary rehabilitation is known to improve dyspnea and exercise tolerance in patients with chronic lung disease, although it does not improve pulmonary function. However, it is controversial whether this improvement is due to increased aerobic exercise capacity. We performed this study to evaluate the effect of six weeks of pulmonary rehabilitation on pulmonary function, gas exchange, exercise tolerance and aerobic exercise capacity in patients with chronic lung disease. Methods: Pulmonary rehabilitation, including education, muscle strengthening exercise and symptom-limited aerobic exercise, was performed for six weeks in fourteen patients with chronic lung disease (COPD 11, bronchiectasis 1, IPF 1, sarcoidosis 1; mean age $57{\pm}4$ years; male 12, female 2). Pre- and post-rehabilitation pulmonary function and exercise capacity were compared. Results: 1) Before rehabilitation, FVC, $FEV_1$ and $FEF_{25-75%}$ were $71.5{\pm}6.4%$, $40.6{\pm}3.4%$ and $19.3{\pm}3.8%$ of predicted value, respectively. TLC, FRC and RV were $130.3{\pm}9.3%$, $157.3{\pm}13.2%$ and $211.1{\pm}23.9%$ predicted, respectively. Diffusing capacity and MVV were $59.1{\pm}1.1%$ and $48.6{\pm}6.2%$. These pulmonary functions did not change after rehabilitation. 2) In the incremental exercise test on a bicycle ergometer, maximum work rate ($57.7{\pm}4.9$ watts vs. $64.8{\pm}6.0$ watts, P=0.036), maximum oxygen consumption ($0.81{\pm}0.07$ L/min vs. $0.96{\pm}0.08$ L/min, P=0.009) and anaerobic threshold ($0.60{\pm}0.06$ L/min vs. $0.76{\pm}0.06$ L/min, P=0.009) increased significantly after rehabilitation. Gas exchange did not improve. 3) Exercise endurance of the upper ($4.5{\pm}0.7$ joule vs. $14.8{\pm}2.4$ joule, P<0.001) and lower extremities ($25.4{\pm}5.7$ joule vs. $42.6{\pm}7.7$ joule, P<0.001), and the 6-minute walking distance ($392{\pm}35$ meter vs.
$459{\pm}33$ meter, P<0.001), increased significantly after rehabilitation. Maximum inspiratory pressure also increased ($68.5{\pm}5.4$ $cmH_2O$ vs. $80.4{\pm}6.4$ $cmH_2O$, P<0.001). Conclusion: Six weeks of pulmonary rehabilitation can improve exercise performance in patients with chronic lung disease.


A Study on Improvement on National Legislation for Sustainable Progress of Space Development Project (우주개발사업의 지속발전을 위한 국내입법의 개선방향에 관한 연구)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.25 no.1
    • /
    • pp.97-158
    • /
    • 2010
  • The purpose of this paper is to examine the contents of, and directions for improving, the national legislation relating to space development in Korea, in order to sustain the progress of Korea's space development projects. Korea launched its first satellite, KITSAT-1, in 1992. The National Space Committee established the Space Development Promotion Basic Plan in 2007, which called for the development of a total of 13 satellites by 2010, a space launch vehicle by 2020, and the launch of a lunar exploration spacecraft by 2021. Korea built its space center at Oenarodo, Goheung County, in June 2009. Korea's first small launch vehicle, KSLV-1, was launched at the Naro Space Center in August 2009, and its second launch was made in June 2010. The United Nations has adopted five treaties relating to the development of outer space: the Outer Space Treaty of 1967, the Rescue and Return Agreement of 1968, the Liability Convention of 1972, the Registration Convention of 1974, and the Moon Treaty of 1979. All five treaties have come into force. Korea has ratified the Outer Space Treaty, the Rescue and Return Agreement, the Liability Convention and the Registration Convention, but not the Moon Treaty. Most developed countries have enacted national legislation relating to the development of outer space, for example: the National Aeronautics and Space Act of 1958 and the Commercial Space Act of 1998 in the United States, the Outer Space Act of 1986 in England, the Establishment Act of the National Space Center of 1961 in France, the Canadian Space Agency Act of 1990 in Canada, the Space Basic Act of 2008 in Japan, and the Law on Space Activity of 1993 in Russia. There are currently three national laws relating to space development in Korea: the Aerospace Industry Development Promotion Act of 1987, the Outer Space Development Promotion Act of 2005, and the Outer Space Damage Compensation Act of 2008.
The Ministry of Knowledge Economy of Korea announced a full amendment draft of the Aerospace Industry Development Promotion Act in December 2009, whose main contents are as follows: (1) changing the title of the Act to the Aerospace Industry Promotion Act; (2) newly defining the air flight test place, etc.; (3) establishing an aerospace industry basic plan and an aerospace industry committee; (4) projects for promoting the aerospace industry; (5) exploration development and international joint development; (6) cooperative research development; (7) mutual benefit projects; (8) projects for strengthening the basis of the aerospace industry; (9) activating aerospace industry clusters; (10) designation of air flight test places, etc.; (11) abolishing the designation and assistance of specific enterprises; (12) abolishing the inspection of performance and quality. The Outer Space Development Promotion Act should be revised with regard to the following matters: (1) the overlap in the legal system between the Outer Space Development Promotion Act and the Aerospace Industry Development Promotion Act; (2) the distribution and adjustment of the national research and development budget for space development between the National Space Committee and the National Science Technology Committee; (3) consideration and preservation of the environment in space development; (4) legal action and maintenance of the legal system for policy and regulation relating to space development. The Outer Space Damage Compensation Act should be revised with regard to the following matters: (1) the definition of space damage and indirect damage; (2) the currency unit of the limit of compensation liability; (3) joint liability and the compensation claim right of the launching person of a space object; (4) establishment of a Space Damage Compensation Council. In Korea, space tourism is expected to become possible in 2013, and it is planned to introduce and operate a manned spaceship in 2013.
Therefore, it is necessary to develop policies to promote the commercial space transportation industry, and to properly maintain the current Aviation Law and the space development-related laws and regulations to that end.


A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is short, however, and bad debt began to increase again after the 2009 global financial crisis owing to the real economic recession. NPLs have become a major investment vehicle in recent years, as investment capital from the domestic capital market has begun to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention because of its recent overheating, research on it has been scarce, since the history of capital-market investment in domestic NPLs is short. In addition, declining profitability and price swings driven by fluctuations in the real estate business call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that determines whether the benchmark yield is achieved, using NPL-market data in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, totaling 2,291 items. As independent variables, from the 11 variables describing the characteristics of the real estate we selected only those related to the dependent variable, using one-to-one t-tests, stepwise logistic regression, and a decision tree. Seven independent variables were chosen: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached.
We chose a binary target because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the model's usefulness. Moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a given level of return will be achieved is sufficient for the decision. To verify that 12%, the standard rate of return used in the industry, is a meaningful reference value, we constructed and compared predictive models whose dependent variables were computed at adjusted threshold returns. The average hit ratio of the model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the seven independent variables, we built and compared models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model. Ten sets of training and testing data were extracted using 10-fold cross-validation; after building each model, the hit ratios of the ten sets were averaged and the performances compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that the seven independent variables and an artificial neural network prediction model can be used effectively in the NPL market.
The proposed model predicts in advance whether a new item will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
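The comparison procedure described in this abstract (a binary "benchmark reached" target, 10-fold cross-validation, and the average hit ratio as the yardstick) can be sketched as follows. This is only an illustrative sketch: the data are synthetic stand-ins generated with scikit-learn, the seven real predictors and the original Korean NPL dataset are not reproduced, and the paper's genetic-algorithm linear model is omitted because it has no standard scikit-learn implementation.

```python
# Sketch: compare classifiers on a binary "benchmark return reached" target
# using 10-fold cross-validation and the mean hit ratio (accuracy).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 2,291 items with 7 features, binary target
# (whether the 12% benchmark return is reached).
X, y = make_classification(n_samples=2291, n_features=7, n_informative=5,
                           random_state=0)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=2000,
                                                  random_state=0)),
}

# Average the hit ratio over the 10 folds for each methodology.
hit_ratios = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    hit_ratios[name] = scores.mean()
    print(f"{name}: mean hit ratio = {scores.mean():.2%}")
```

On the synthetic data the absolute numbers are meaningless; the point is the protocol: one binary target, one shared 10-fold split, and a single averaged hit ratio per methodology, exactly as the study compares its five models.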

Facile [11C]PIB Synthesis Using an On-cartridge Methylation and Purification Showed Higher Specific Activity than Conventional Method Using Loop and High Performance Liquid Chromatography Purification (Loop와 HPLC Purification 방법보다 더 높은 비방사능을 보여주는 카트리지 Methylation과 Purification을 이용한 손쉬운 [ 11C]PIB 합성)

  • Lee, Yong-Seok;Cho, Yong-Hyun;Lee, Hong-Jae;Lee, Yun-Sang;Jeong, Jae Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.67-73
    • /
    • 2018
  • $[^{11}C]PIB$ synthesis has been performed in our lab by loop methylation followed by HPLC purification. However, this method is time-consuming and requires complicated systems. We therefore developed an on-cartridge method that simplifies the synthetic procedure and greatly reduces the synthesis time by removing the HPLC purification step. We compared six different cartridges and evaluated the $[^{11}C]PIB$ production yields and specific activities. $[^{11}C]MeOTf$ was synthesized using a TRACERlab FXC Pro and transferred onto the cartridge by blowing with helium gas for 3 min. To remove byproducts and impurities, the cartridges were washed with 20 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ solution (pH 5.1) and 10 mL of distilled water. $[^{11}C]PIB$ was then eluted with 5 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ into a collecting vial containing 10 mL of saline. Among the six cartridges, only the tC18 environmental cartridge removed impurities and byproducts from $[^{11}C]PIB$ completely, and it showed a higher specific activity than the traditional HPLC purification method. The method took only 8~9 min from methylation to formulation. For the tC18 environmental cartridge and the conventional HPLC loop methods, the radiochemical yields were $12.3{\pm}2.2%$ and $13.9{\pm}4.4%$, respectively, and the molar activities were $420.6{\pm}20.4GBq/{\mu}mol$ (n=3) and $78.7{\pm}39.7GBq/{\mu}mol$ (n=41), respectively. In conclusion, we developed a facile on-cartridge methylation method for $[^{11}C]PIB$ synthesis that makes the procedure simpler and faster and gives a higher molar activity than HPLC purification.
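The trade-off reported above can be made explicit with a back-of-the-envelope calculation from the abstract's mean values (standard deviations and the unequal sample sizes, n=3 vs n=41, are ignored in this rough sketch):

```python
# Rough comparison of the two purification routes, using only the
# mean values reported in the abstract.
cartridge_molar_activity = 420.6   # GBq/umol, tC18 environmental cartridge
hplc_molar_activity = 78.7         # GBq/umol, conventional HPLC loop method

cartridge_yield = 12.3             # %, radiochemical yield, on-cartridge
hplc_yield = 13.9                  # %, radiochemical yield, HPLC loop

molar_activity_gain = cartridge_molar_activity / hplc_molar_activity
yield_difference = cartridge_yield - hplc_yield

print(f"molar activity gain: {molar_activity_gain:.1f}x")               # ~5.3x
print(f"yield difference: {yield_difference:+.1f} percentage points")   # -1.6
```

That is, the on-cartridge route trades about 1.6 percentage points of radiochemical yield for roughly a fivefold improvement in molar activity, in addition to the shorter synthesis time.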