• Title/Summary/Keyword: Variable S/A


Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades from 2G to 5G, focusing mainly on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver these services, low latency and high reliability are as critical as high data rates for real-time services. Thus, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/㎢. In particular, intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, demand reduced delay and high reliability for real-time operation in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use. These constraints are difficult to overcome with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with the usual centralized structure have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs therefore need to be partitioned at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we can assume that information about neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50 to 250 m and vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes the delay.
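As a rough illustration of this delay budget, the sketch below (illustrative parameter values, not the paper's simulator) computes how long a vehicle dwells in a small cell at the radii and speeds above, and how a worst-case delay decomposes into the information change cycle, RTD, and SDN processing time.

```python
# A minimal sketch of the delay picture described above. All parameter values
# are illustrative assumptions, not results from the paper.

def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time to cross a cell diameter at constant speed."""
    speed_ms = speed_kmh * 1000 / 3600
    return 2 * cell_radius_m / speed_ms

def total_delay_ms(info_cycle_ms: float, rtd_ms: float, sdn_proc_ms: float) -> float:
    """Worst case seen by the vehicle: it may wait a full information change
    cycle before fresh data is even produced, then pay RTD and SDN processing."""
    return info_cycle_ms + rtd_ms + sdn_proc_ms

for radius in (50, 150, 250):          # cell radii from the abstract (m)
    for speed in (30, 100, 200):       # vehicle speeds from the abstract (km/h)
        dwell = cell_dwell_time_s(radius, speed)
        print(f"radius={radius:3d} m, speed={speed:3d} km/h -> dwell {dwell:5.1f} s")

# With RTD below 1 ms (per the abstract), the budget is dominated by the
# information change cycle and the SDN processing time:
print(total_delay_ms(info_cycle_ms=10, rtd_ms=1, sdn_proc_ms=5), "ms (illustrative)")
```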

The National Survey of Acute Respiratory Distress Syndrome in Korea (급성호흡곤란증후군의 전국 실태조사 보고)

  • Scientific Subcommittee for National Survey of Acute Respiratory Distress Syndrome in Korean Academy of Tuberculosis and Respiratory Disease
    • Tuberculosis and Respiratory Diseases
    • /
    • v.44 no.1
    • /
    • pp.25-43
    • /
    • 1997
  • Introduction: The reported outcome and incidence of acute respiratory distress syndrome (ARDS) can vary with the definitions of ARDS used by researchers. The purpose of this national survey was to define the risk factors of ARDS and to investigate the prognostic indices related to ARDS mortality in Korea, according to the definition established by the American-European Consensus Conference in 1992. Methods: A multicenter registry of 48 university or university-affiliated hospitals and 18 general hospitals with more than 400 beds enrolled patients with acute respiratory distress syndrome over 13 months using the same registry protocol. Results: 1. In the 12 months of the registry, 167 patients were enrolled at 24 hospitals. 2. The mean age was 56.5 years (${\pm}17.2$ years) and the male-to-female ratio was 1.9:1. 3. Sepsis was the most common risk factor (78.1%), followed by aspiration (16.6%), trauma (11.6%), and shock (8.5%). 4. The overall mortality rate was 71.9%. The mean duration from the diagnosis of ARDS to death was 11 days (${\pm}13.1$ days). Respiratory insufficiency was the major cause of death (43.7%), followed by sepsis (36.1%), heart failure (7.6%), and hepatic failure (6.7%). 5. There were no significant differences in mortality based on sex or age, nor between infectious and noninfectious causes of ARDS. 6. There were significant differences in pulse rate, platelet count, serum albumin and glucose levels, 24-hour urine output, arterial pH, $PaO_2$, $PaCO_2$, $SaO_2$, alveolar-arterial oxygen difference, $PaO_2/FiO_2$, and PEEP/$FiO_2$ between survivors and nonsurvivors on study days 1 through 6 of the first week after enrollment. 7. Survivors had significantly less organ failure and lower APACHE III scores at the time of ARDS diagnosis (P<0.05). 8. The number of organ failures (odds ratio 1.95, 95% confidence interval: 1.05-3.61, P=0.03) and the APACHE III score (odds ratio 1.59, 95% confidence interval: 1.01-2.50, P=0.04) appeared to be independent risk factors for mortality in patients with ARDS. Conclusions: Mortality was 71.9% among the 167 patients in this survey using the 1992 American-European Consensus Conference definition, and respiratory insufficiency was the leading cause of death. In addition, the number of organ failures and the APACHE III score at the time of ARDS diagnosis appeared to be independent risk factors for mortality in patients with ARDS.
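For readers unfamiliar with how such adjusted odds ratios arise, the sketch below (synthetic data and hypothetical variable names, not the survey's analysis code) fits a multivariable logistic regression and exponentiates the coefficients and confidence limits.

```python
# A minimal sketch, on synthetic data, of how adjusted odds ratios with 95%
# confidence intervals come out of a multivariable logistic regression.
# Variable names and coefficients here are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
organ_failures = rng.integers(0, 5, n)            # hypothetical predictor
apache_iii = rng.normal(60, 20, n)                # hypothetical predictor
logit_p = -6 + 0.7 * organ_failures + 0.06 * apache_iii
died = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # synthetic outcome

X = sm.add_constant(pd.DataFrame({"organ_failures": organ_failures,
                                  "apache_iii": apache_iii}))
fit = sm.Logit(died, X).fit(disp=0)

# Odds ratio = exp(coefficient); the CI is exponentiated the same way.
summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                    axis=1)
print(summary)
```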


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a landmark match against Lee Sedol. Many people thought a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, deep learning research on traditional business data and structured data analysis is hard to find. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of filters (output channels), and the conditions under which dropout was applied. The F1 score was used instead of overall accuracy to show how well the models classify the class of interest. The details of applying each deep learning technique are as follows, and a sketch of the CNN configuration appears after this abstract. The CNN algorithm recognizes features from adjacent values, but in business data the distance between fields is usually meaningless because each field is independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the second layer reads the input in the reverse direction of the first in order to reduce the influence of field position.
For the dropout technique, each hidden-layer neuron was dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without and generally classify better. Second, CNN models classify better than MLP models. This is interesting because CNN performed well on binary classification problems, to which it has rarely been applied, as well as in fields where its effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
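To make the CNN configuration above concrete, here is a minimal sketch (hypothetical layer sizes and optimizer; the paper does not publish its code) in which the Conv1D kernel spans all input fields at once, a hidden layer follows, dropout is applied at probability 0.5, and the F1 score is used for evaluation.

```python
# A minimal sketch of the described CNN-with-dropout setup on toy data.
# Layer widths, epochs, and the optimizer are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

n_fields = 16                                              # e.g. age, occupation, loan status, ...
X = np.random.rand(1000, n_fields, 1).astype("float32")    # toy stand-in for the bank data
y = np.random.randint(0, 2, 1000)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_fields, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=n_fields, activation="relu"),  # filter spans all fields
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),          # extra hidden layer for decisions
    tf.keras.layers.Dropout(0.5),                          # neurons dropped with probability 0.5
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))                            # class-of-interest metric, not accuracy
```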

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and results from studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we adopt the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results. In particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled with four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent for validation. To mitigate the small sample size, we applied five-fold cross validation to the dataset. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches, because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial GA package. The other comparative models were implemented using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. The values of the finally selected kernel parameters were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
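As a rough illustration of the GAMSVM idea, the sketch below (not the authors' LIBSVM/Evolver implementation; population size, rates, ranges, and the stand-in dataset are assumptions) encodes a feature mask plus the RBF parameters C and gamma in one chromosome and uses cross-validated accuracy of a one-against-one multiclass SVM as the fitness.

```python
# A minimal GA-over-MSVM sketch: one chromosome holds a binary feature mask
# and two log-scale kernel genes. All hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)          # toy stand-in for the 14 financial ratios
n_feat = X.shape[1]
rng = np.random.default_rng(42)

def fitness(chrom):
    mask = chrom[:n_feat].astype(bool)
    if not mask.any():
        return 0.0
    C, gamma = 10 ** chrom[n_feat], 10 ** chrom[n_feat + 1]     # log-scale genes
    clf = SVC(C=C, gamma=gamma, decision_function_shape="ovo")  # OAO multiclass SVM
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()     # five-fold CV

def random_chrom():
    return np.concatenate([rng.integers(0, 2, n_feat),          # feature mask
                           rng.uniform(-2, 3, 1),               # log10 C
                           rng.uniform(-5, 1, 1)])              # log10 gamma

pop = [random_chrom() for _ in range(20)]
for gen in range(15):
    scores = np.array([fitness(c) for c in pop])
    parents = [pop[i] for i in np.argsort(scores)[-10:]]        # selection
    children = []
    while len(children) < 10:
        a, b = rng.choice(10, 2, replace=False)
        cut = rng.integers(1, n_feat + 1)
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])  # crossover
        flip = rng.random(n_feat) < 0.05                        # mutation: bit flips
        child[:n_feat][flip] = 1 - child[:n_feat][flip]
        child[n_feat:] += rng.normal(0, 0.1, 2)                 # mutate kernel genes
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best CV accuracy:", round(fitness(best), 3))
```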

A Study on the Factors Influencing Technology Innovation Capability on the Knowledge Management Performance of the Company: Focused on Government Small and Medium Venture Business R&D Business (기술혁신역량이 기업의 지식경영성과에 미치는 요인에 관한 연구: 정부 중소벤처기업 R&D사업을 중심으로)

  • Seol, Dong-Cheol;Park, Cheol-Woo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.15 no.4
    • /
    • pp.193-216
    • /
    • 2020
  • Due to the recent mid- to long-term slump and falling growth rates in the global economy, interest is growing day by day in organizational structures that create new products or services as a way to survive and develop in an opaque internal and external environment, and that enhance organizational sustainability through changes in production methods and business innovation. In this atmosphere, it is agreed that the growth of small and medium-sized venture companies has a significant impact on the national economy, and various efforts are being made to enhance the technological innovation capabilities of their members so that these companies can raise and sustain their performance. The purpose of this study is to investigate how the technological innovation capabilities of small and medium-sized venture companies correlate with knowledge management performance, and to analyze the role of network capabilities in organizing the strategic activities of an enterprise to obtain, from external networks, the resources and organizational capabilities used for value creation. In other words, we studied the impact of the technological innovation capabilities of small and medium venture companies on knowledge management performance, with network capability as a mediating variable, and tested the hypothesis that innovation capabilities have a positive impact on knowledge management performance through network capabilities. Economic activities based on technological innovation capabilities should respond quickly to new changes in an environment of increased uncertainty and lead to macroeconomic growth and development, as well as overcoming long-term economic downturns, so that they can become the nation's new growth engine alongside the sustainable growth and survival of the organization. This study set knowledge management performance, the most important performance within the organization, as the dependent variable. As a result, among the technological innovation capabilities, R&D and learning capabilities had no direct impact on financial performance, whereas corporate innovation activities had a positive impact on both financial and non-financial performance. The finding that non-financial factors such as quality and productivity improvement matter in the management of small and medium-sized venture companies utilizing their technological innovation capabilities is contrary to a number of prior studies in which corporate innovation activities affected financial performance. A likely reason is that the surveyed companies are more than seven years past the start-up stage yet have sales of less than 10 billion won, so unlike start-ups, their R&D and learning capabilities have more positive effects on intangible non-financial performance than on financial performance. With network capability as the mediating variable, R&D and learning capabilities had a positive (+) impact on financial performance, while corporate innovation activities showed no mediated impact on either financial or non-financial performance, and R&D and learning capabilities showed no mediated impact on non-financial performance.
In short, the mediating effect of network capability was limited to the path from R&D and learning capabilities to quantitative financial performance.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers generating 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization that transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect on collaboration efficiency is more pronounced for more academic tasks.
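The two focal measures can be computed directly from per-editor contribution counts; the sketch below uses toy counts (the study used actual Wikipedia edit histories).

```python
# A minimal sketch of the Pareto ratio and Gini coefficient described above,
# computed from hypothetical per-editor edit counts.
import numpy as np

def pareto_ratio(contribs: np.ndarray) -> float:
    """Share of total contributions made by the top 20% of contributors."""
    sorted_desc = np.sort(contribs)[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(contribs))))
    return sorted_desc[:top_n].sum() / contribs.sum()

def gini(contribs: np.ndarray) -> float:
    """Gini coefficient of inequality (0 = perfectly equal, ~1 = maximally unequal)."""
    x = np.sort(contribs).astype(float)    # ascending order
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

edits = np.array([120, 45, 30, 9, 7, 5, 3, 2, 2, 1])   # edits per editor (toy)
print("Pareto ratio:", round(pareto_ratio(edits), 3))  # contribution share of top 20%
print("Gini:", round(gini(edits), 3))
```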

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains information that can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable, since MRA can only capture the linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while presenting appropriate visually stimulating content and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and the 'grid search' technique to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
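A minimal sketch of this modeling approach follows, on synthetic data with illustrative parameter grids: epsilon-insensitive SVR tuned by grid search over C, gamma (the RBF analogue of ${\sigma}^2$), and epsilon, compared by MAE on a hold-out set.

```python
# A minimal SVR grid-search sketch; the data are synthetic stand-ins for the
# 297 facial-feature cases, and the grids are assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))                              # stand-in for facial features
y = X @ rng.normal(size=10) + 0.3 * rng.normal(size=297)    # e.g. arousal level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(
    SVR(kernel="rbf"),                                      # epsilon-insensitive loss
    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1], "epsilon": [0.01, 0.1, 0.5]},
    cv=5,
    scoring="neg_mean_absolute_error",
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, grid.predict(X_te)))
```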

An Analysis of BMD Changes With Preoperative and Postoperative Premenopausal Breast Cancer Patient (폐경 전 유방암 환자의 치료 전.후 골밀도 변화 분석)

  • Kim, Su-Jin;Son, Soon-Yong;Choi, Kwan-Woo;Lee, Joo-Ah;Min, Jung-Whan;Kim, Hyun-Soo;Ma, Sang-Chull;Lee, Jong-Seok;Yoo, Beong-Gyu
    • Journal of radiological science and technology
    • /
    • v.37 no.4
    • /
    • pp.279-286
    • /
    • 2014
  • The purpose of this study is to provide baseline data by comparing the BMD (bone mineral density) of breast cancer patients before and after treatment, given the bone loss associated with radiation and chemotherapy. The participants were 254 breast cancer patients who had BMD measured after surgery and treatment from March 2007 to September 2013. After excluding 84 patients with menopause or hysterectomy, 171 patients were analyzed. BMD values of the lumbar spine and femur before and after treatment, obtained from PACS by dual-energy X-ray absorptiometry, were analyzed. We examined overall BMD change and change by treatment type, and analyzed correlations in detail using marital status, number of children, breastfeeding history, age at menarche, and breast cancer therapy type as variables. Data were analyzed using SPSS for Windows (version 18.0). BMD decreased by 7.1% in the lumbar spine and 3.1% in the femur (p<.01). The femoral decrement was relatively large ($0.067g/cm^2$) in the chemotherapy-only group (p<.05). Decrements varied with marital status, number of children, breastfeeding history, age at menarche, and therapy type, but without statistical significance. The results show that BMD decreased after treatment in premenopausal breast cancer patients; patients with relatively large decrements should be classified as a high-risk group, and aggressive preventive policies would be necessary.

Electrophoretic Variation of Seed Proteins in Robinia pseudo-acacia (아까시나무(Robinia pseudo-acacia) 종자 단백질의 전기 영동 변이)

  • 김창호;이호준;김용옥
    • The Korean Journal of Ecology
    • /
    • v.16 no.4
    • /
    • pp.515-526
    • /
    • 1993
  • In order to study the ecotypic variation of Robinia pseudo-acacia L. distributed in the southern area of the Korean peninsula, 15 local populations (Daejin, Sokcho, Kangneung, Mt. Surak, Hongcheon, Kwangneung, Namhansanseong, Chungju, Yesan, Andong, Jeonju, Dalseong, Changweon, Mokpo, and Wando), located from $34^{\circ}18'N$ to $38^{\circ}36'N$, were selected based on latitude and geographical distance. Seeds of these populations were collected, and the protein contents of the seeds and their band patterns were investigated. The seed proteins of all populations were electrophoresed on SDS-polyacrylamide gel. The total number of protein bands was 35, with molecular weights ranging from 17,258 daltons to 142,232 daltons. The number of seed protein bands ranged from 23 in Dalseong and Hongcheon to 32 in Daejin and Sokcho, tending to increase with latitude. Based on the protein analysis, the local populations were classified into 3 local types: the middle northeast coastal type (Daejin, Sokcho, Kangneung), the central type (Mt. Surak, Hongcheon, Kwangneung, Namhansanseong, Chungju), and the southern type (Yesan, Andong, Jeonju, Dalseong, Changweon, Mokpo, Wando). According to the results of cluster analysis by UPGMA based on the similarity index (Jaccard coefficient) of the band patterns, the 3 local types were subdivided further into 6 types: the middle northeast coastal type (Sokcho, Kangneung), the north central type I (Mt. Surak, Hongcheon), the north central type II (Namhansanseong, Chungju, Daejin), the north central type III (Kwangneung), the south central type (Yesan, Dalseong, Jeonju), and the southern type (Andong, Changweon, Mokpo, Dalseong, Wando). The No. 12 band of the separated seed proteins showed the highest staining density in the preparations from all populations; the No. 11~13 and No. 23~28 bands also showed high densities. As a whole, the southern type populations (Changweon, Mokpo, Wando) showed high protein contents and high staining density. Total protein contents of the seeds in each population varied from 9.68 mg/g (Mt. Surak) to 17.30 mg/g (Jeonju), showing an increasing trend toward lower latitudes.
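The clustering step can be reproduced in outline as follows; the band matrix here is a toy stand-in (the study scored 35 bands across 15 populations), using Jaccard distances and UPGMA (average linkage).

```python
# A minimal sketch of Jaccard-similarity UPGMA clustering of presence/absence
# band patterns, as described above. The band data are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: populations; columns: presence (1) / absence (0) of each protein band.
bands = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1],   # e.g. Daejin
    [1, 1, 1, 0, 1, 1, 0, 0],   # e.g. Sokcho
    [1, 0, 1, 1, 0, 1, 1, 0],   # e.g. Hongcheon
    [1, 0, 1, 1, 0, 1, 1, 1],   # e.g. Dalseong
], dtype=bool)

dist = pdist(bands, metric="jaccard")              # 1 - Jaccard similarity coefficient
tree = linkage(dist, method="average")             # UPGMA = average linkage
print(fcluster(tree, t=2, criterion="maxclust"))   # assign populations to 2 local types
```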


Diversity, Spatial Distribution and Ecological Characteristics of Relict Forest Trees in South Korea (한국 산림유존목의 다양성, 공간 분포 및 생태 특성)

  • CHO, Hyun-Je;Lee, Cheol-Ho;Shin, Joon-Hwan;Bae, Kwan-Ho;Cho, Yong-Chan;Kim, Jun-Soo
    • Journal of Korean Society of Forest Science
    • /
    • v.105 no.4
    • /
    • pp.401-413
    • /
    • 2016
  • Forest resource utilization and a varied disturbance history have affected the rarity and conservation value of relict forest trees in South Korea, which serve as habitat for forest biodiversity, an important carbon stock, and a cultural record of human and natural history. This study was conducted to establish baseline data for forest resource conservation by clarifying the species diversity, spatial distribution, and ecological characteristics (individual and habitat) of relict forest trees (DBH > 300 cm), based on data gathered from mountain trails, high-resolution aerial photographs, field professionals, and field surveys. As a result, 54 taxa (18 families, 32 genera, 48 species, 1 subspecies, 3 varieties, and 2 forms), about 22% of the tree species in Korea, were identified in the field. A total of 837 relict trees were observed, the majority belonging to Pinaceae, deciduous Fagaceae, and Rosaceae, families rich in population diversity. High-elevation areas were important for relict trees, with a mean altitudinal distribution of 1,200 m a.s.l., likely reflecting gradients of human activity; mid-steep slopes and northern aspects were also important for the trees' persistence. Many individuals exhibited damage to larger branches (55.6%) and consequently relatively low mean canopy coverage (below 80%). Overall, the present diversity and abundance of relict forest trees in South Korea are the result of complex processes involving climate variation, local weather, and biological factors, and these big old trees are considered important elements of forest biodiversity. Future work should clarify the role and function of relict trees in forest ecosystems, develop in situ and ex situ programs for important trees and habitats, and lay the groundwork for conservation policy, such as a "Guideline for identifying and measuring forest relict trees".