• Title/Summary/Keyword: decision

The Relationship between Financial Constraints and Investment Activities : Evidenced from Korean Logistics Firms (우리나라 물류기업의 재무제약 수준과 투자활동과의 관련성에 관한 연구)

  • Lee, Sung-Yhun
    • Journal of Korea Port Economic Association / v.40 no.2 / pp.65-78 / 2024
  • This study investigates the relationship between financial constraints and investment activities in Korean logistics firms. A sample of 340 companies engaged in the transportation sector, as classified by the 2021 KSIC, was selected for analysis. Financial data obtained from DART were used to compile a panel dataset spanning 1996 to 2021, totaling 6,155 observations. The research model was validated, and tests for heteroscedasticity and autocorrelation in the error terms were conducted to account for the panel data structure. The relationship between investment activities in the previous period and current investment activities was analyzed using panel Generalized Method of Moments (GMM). The results indicate that Korean logistics firms tend to increase investment activities as their level of financial constraints improves; a positive relationship between the level of financial constraints and investment activities was consistently observed across all models. These findings suggest that investment decision-making varies with the financial constraints faced by companies, in line with previous research indicating that investment by constrained firms is subdued. Moreover, the model examining investment persistence showed that investment activities in the previous period influence current investment activities, whereas investment activities from two periods ago showed no significant relationship with current investment. Among the control variables, firm size and cash flow were positively related to investment, while debt size and asset diversification were negatively related. Thus, larger firm size and smoother cash flows were associated with more proactive investment, while high debt levels and extensive asset diversification appeared to constrain investment in logistics companies. These results are interpreted as showing that, under financial constraints, internal funding sources such as cash flow are positively related to investment, whereas external capital sources such as debt are negatively related, consistent with empirical findings from previous research.
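
To make the estimation step concrete, the sketch below regresses current investment on lagged investment, a financial-constraint index, and the control variables named in the abstract. It uses a fixed-effects panel estimator from the linearmodels package as a simplified stand-in for the paper's panel GMM, and every column name and the data file are hypothetical placeholders rather than the paper's actual variables.

```python
# Hedged sketch: dynamic panel regression of investment on financial constraints.
# Fixed-effects estimation is used as a simplified stand-in for the paper's
# panel GMM; all column names and the CSV path are hypothetical.
import pandas as pd
from linearmodels.panel import PanelOLS

# Panel data indexed by (firm, year); the columns are assumed, not the paper's dataset.
df = pd.read_csv("logistics_panel.csv").set_index(["firm_id", "year"])

# Lagged dependent variable to capture persistence of investment.
df["invest_lag1"] = df.groupby(level="firm_id")["invest"].shift(1)
df = df.dropna()

exog = df[["invest_lag1", "kz_index", "size", "cashflow", "debt", "diversification"]]
model = PanelOLS(df["invest"], exog, entity_effects=True, time_effects=True)
result = model.fit(cov_type="clustered", cluster_entity=True)  # robust to heteroscedasticity
print(result.summary)
```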

A Study on the Development and Validation of Three Systems of Action Scale in Home Economics for Middle and High School Students (중⋅고등학생용 가정교과 세 행동체계 척도 개발 및 타당화 연구)

  • Choi, Seong Youn
    • Journal of Korean Home Economics Education Association / v.35 no.3 / pp.67-96 / 2023
  • The purpose of this study was to develop and validate a scale that can capture the reality of the three systems of action for middle and high school students in home economics. For this purpose, a total of 105 preliminary questions, 35 for each system of action, were developed as 5-point Likert items to measure technical action, communicative action, and emancipative action, based on a review of domestic and international literature related to the three systems of action. The preliminary questions were revised and supplemented twice through content validity reviews by home economics education experts. A preliminary survey with the 70 questions thus developed was conducted on middle and high school students, and 166 copies were collected. Exploratory factor analysis of the collected questionnaires, conducted to test the validity of the scale, showed that a structure of 38 questions across 7 factors was appropriate. The main survey was then constructed based on the exploratory factor analysis results and administered to middle and high school students; 548 copies were collected and confirmatory factor analysis was performed. A total of 38 questions were finally selected through confirmatory factor analysis: basic living ability (5 questions), self-management ability (4), information processing ability (4), communication/interpersonal ability (12), critical thinking ability (3), decision-making ability (7), and empowerment (3). The model fit was χ²=1846.741 (p<.001), CFI=0.865, TLI=0.853, RMSEA=0.058, and the standardized regression weight for each question was greater than 0.5, so the scale can be regarded as a suitable instrument for measuring the status of the three systems of action of middle and high school students in home economics. The three systems of action scale showed significant correlations with self-acceptance, future planning, intimacy, and uniqueness, which are sub-factors of the self-identity scale, and with the social participation scale, thereby confirming its concurrent validity.
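
As a rough illustration of the exploratory factor analysis stage, the sketch below fits a 7-factor model to simulated 5-point Likert responses and retains items by a conventional loading cutoff. scikit-learn's FactorAnalysis is an illustrative choice; the respondent and item counts follow the abstract's preliminary survey, but the responses and the cutoff are placeholders, not the study's actual instrument.

```python
# Hedged sketch of the exploratory factor analysis step described in the abstract.
# The response matrix is simulated; with random data few items will pass the cutoff.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students, n_items, n_factors = 166, 70, 7   # preliminary survey sizes from the abstract
responses = rng.integers(1, 6, size=(n_students, n_items)).astype(float)  # simulated 5-point Likert

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(responses)

# Retain items whose largest absolute loading exceeds a conventional cutoff (e.g., 0.4).
loadings = fa.components_.T                      # shape: (n_items, n_factors)
keep = np.abs(loadings).max(axis=1) >= 0.4
print(f"{keep.sum()} of {n_items} items retained")
```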

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous data for various purposes. In recent years especially, people tend to share their experiences related to their leisure activities while also reviewing others' inputs concerning those activities. Therefore, by referring to others' leisure activity-related experiences, they are able to gather information that might guarantee them better leisure activities in the future. This phenomenon has appeared throughout many aspects of leisure activities such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites provide information on each product in various formats depending on different purposes and perspectives. Generally, most of the websites provide the average ratings and detailed reviews of users who actually used the products/services, and these ratings and reviews can support the decisions of potential customers in purchasing the same products/services. However, the existing websites offering information on leisure activities only provide ratings and reviews at a single level of the evaluation criteria. Therefore, to identify the main issue for each evaluation criterion as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users search for the characteristics of the detailed elements for one or more specific evaluation criteria based on their priorities, they must spend a great deal of time and effort to obtain the desired information by reading more reviews and understanding their contents. Although some websites break down the evaluation criteria and direct the user to input reviews according to different levels of criteria, the excessive number of input sections makes the whole process inconvenient for users. Further, problems may arise if a user does not follow the instructions for the input sections or fills in the wrong input sections. Finally, treating the evaluation criteria breakdown as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is a challenging task. For example, when a review about a certain hotel is written, people tend to write only one-stage reviews for various components such as accessibility, rooms, services, or food. These might be the reviews for the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information for these questions. In addition, if a breakdown of the evaluation criteria were provided along with various input sections, a user might only fill in the evaluation criterion for accessibility, or fill in the wrong information, such as information about rooms, in the evaluation criterion for accessibility. Thus, the reliability of the segmented review would be greatly reduced. In this study, we propose an approach to overcome the limitations of the existing leisure activity information websites, namely, (1) the reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up the evaluation criteria.
In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms that are frequently used for that criterion. Next, the sentences in the review documents containing the terms in the constructed lexicon are decomposed into review units, which are then reconstructed according to the evaluation criteria. Finally, the issues of the constructed review units are derived by evaluation criterion, and the summary results are provided along with the review units themselves. This approach therefore aims to help users save time and effort, because they only read the information relevant to each evaluation criterion rather than going through the entire text of every review. Our proposed methodology is based on topic modeling, which is actively used in text analysis. The review is decomposed into sentence units rather than treating the whole review as a document unit. After being decomposed into individual review units, the units are reorganized according to each evaluation criterion and then used in the subsequent analysis, which distinguishes this work from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which were then reorganized according to six evaluation criteria. By applying these review units to our methodology, we present the analysis results and demonstrate the utility of the proposed methodology.
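
The following sketch illustrates the pipeline described above on toy data: decompose reviews into sentence-level review units, assign each unit to an evaluation criterion through a term lexicon, and run LDA per criterion to surface its issues. The sample reviews, criterion names, and lexicon terms are invented for illustration, and scikit-learn's LDA is used as one possible topic modeling implementation.

```python
# Hedged sketch of the review-unit pipeline: sentence split -> lexicon-based
# criterion assignment -> per-criterion LDA. All data here are illustrative.
import re
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "The room was clean and spacious. The subway station is only five minutes away.",
    "Breakfast was excellent but the front desk service was slow.",
    "Room service was quick and the room smelled fresh.",
]
lexicon = {                      # hypothetical per-criterion term lexicons
    "accessibility": {"subway", "station", "airport", "distance"},
    "rooms": {"room", "bed", "clean", "spacious"},
    "service": {"service", "staff", "front", "desk"},
    "food": {"breakfast", "restaurant", "buffet"},
}

# 1) Decompose reviews into sentence-level review units.
units = [s.strip() for r in reviews for s in re.split(r"[.!?]", r) if s.strip()]

# 2) Reorganize the units by evaluation criterion using the lexicon.
by_criterion = defaultdict(list)
for unit in units:
    tokens = set(unit.lower().split())
    for criterion, terms in lexicon.items():
        if tokens & terms:
            by_criterion[criterion].append(unit)

# 3) Derive issues per criterion with topic modeling (LDA).
for criterion, docs in by_criterion.items():
    if len(docs) < 2:
        continue                  # too few units for a meaningful topic model
    X = CountVectorizer(stop_words="english").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    print(criterion, "units:", len(docs), "topic-term matrix:", lda.components_.shape)
```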

A Study on Improvements on Legal Structure on Security of National Research and Development Projects (과학기술 및 학술 연구보고서 서비스 제공을 위한 국가연구개발사업 관련 법령 입법론 -저작권법상 공공저작물의 자유이용 제도와 연계를 중심으로-)

  • Kang, Sun Joon;Won, Yoo Hyung;Choi, San;Kim, Jun Huck;Kim, Seul Ki
    • Proceedings of the Korea Technology Innovation Society Conference / 2015.05a / pp.545-570 / 2015
  • Korea is among the ten countries with the largest R&D budgets and the highest R&D investment-to-GDP ratios, yet the subject of security and protection of R&D results remains relatively unexplored in the country. Many countries have implemented legal measures to protect cutting-edge industrial technologies whose leakage to other countries would adversely affect national security and the economy. While Korea has a generally stable legal framework, as provided in the Regulation on the National R&D Program Management (the "Regulation") and the Act on Industrial Technology Protection, many difficulties arise in practice when determining the details of security management and obligations and setting standards in carrying out national R&D projects. This paper proposes to modify and improve the security level classification standards in the Regulation. The Regulation provides a dual security level decision-making system for R&D projects: the security level can be determined either by the researcher or by the central agency in charge of the project. Unifying this dual system would avoid unnecessary confusion. To prevent leakage, it is crucial that research projects be carried out in compliance with their assigned security levels and standards and that results be effectively managed. The paper examines, from a practitioner's perspective, relevant legal provisions on leakage of confidential R&D projects, infringement, injunction, punishment, attempt and conspiracy, dual liability, the duty to report to the National Intelligence Service (the "NIS") on the security management process and other security issues arising from national R&D projects, and manual drafting in case of a breach. The paper recommends training security and technology experts, such as industrial security specialists, so that laws on security level classification standards and the relevant technological content can be properly amended. A quarterly policy development committee should also be set up by the NIS in cooperation with relevant organizations. The committee should provide a project management manual with step-by-step guidance for organizations carrying out national R&D projects as a preventive measure against possible leakage. In the short term, the duties of the NIS National Industrial Security Center should be expanded to incorporate the security of national R&D projects. In the long term, a security task force should be set up to protect, support, and manage the projects, with responsibilities including research, policy development, public relations, and training on security-related issues. Through these means, a social consensus can be reached on the need to protect national R&D projects. The most efficient way to implement these measures is to facilitate security training programs and meetings that provide opportunities for communication between industrial security experts and researchers. Furthermore, the Regulation's security provisions must be examined and improved.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program developed by Google DeepMind, won a landmark victory against Lee Sedol. Many people had thought that machines could not beat a human at Go because, unlike chess, the number of possible move sequences is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows particularly good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. Since all network design alternatives cannot be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of filters, and the application of dropout. The F1 score, rather than overall accuracy, was used to evaluate how well the models classify the class of interest. The details of applying each deep learning technique are as follows. A CNN recognizes features by reading values adjacent to a given value, but in business data the distance between fields usually does not matter because each field is independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of the position of each field. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without, and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs perform well not only in fields where their effectiveness has been proven but also in binary classification problems to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
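
A minimal sketch of the CNN-with-dropout configuration described above is given below, assuming a Keras/TensorFlow implementation: the Conv1D kernel spans all input fields at once, a hidden layer is followed by Dropout(0.5), and the F1 score is computed afterwards. The data are random placeholders rather than the Portuguese bank telemarketing dataset, and the layer widths are illustrative.

```python
# Hedged sketch of the CNN-with-dropout setup: kernel size equals the number
# of fields so one filter reads the whole record. Placeholder data only.
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

n_samples, n_fields = 1000, 16
X = np.random.rand(n_samples, n_fields, 1).astype("float32")   # placeholder features
y = np.random.randint(0, 2, size=n_samples)                    # placeholder binary target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_fields, 1)),
    # Kernel size equals the number of fields: one filter spans the whole record.
    tf.keras.layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),                               # drop probability 0.5, as in the paper
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# F1 score on the (placeholder) data, mirroring the paper's evaluation metric.
pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))
```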

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, with judgment errors above 30%. Therefore, an accurate steel plate fault diagnosis system has long been required in the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, because only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification because it establishes an individual Mahalanobis space for each class; 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate fault diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, data on steel plate faults are collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain. A variable with a negative overall SN ratio gain should be removed, while a variable with a positive gain is worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test is carried out to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California, Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based diagnosis system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in the industry. In addition, the proposed system can reduce the number of measurement sensors installed in the field thanks to the variable optimization process. These results show that the proposed system not only diagnoses steel plate faults well but also reduces operation and maintenance costs. In future work, the system will be applied in the field to validate its actual effectiveness, and the accuracy will be improved based on those results.
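
The core per-class Mahalanobis-space idea behind S-MTS can be sketched as follows: one Mahalanobis space is built per reference (fault) class and a test sample is assigned to the class with the smallest distance. The orthogonal-array/SN-ratio variable optimization stage is omitted, and the data are synthetic rather than the UCI steel plates faults dataset.

```python
# Hedged sketch of per-class Mahalanobis spaces and minimum-distance classification.
# Synthetic reference groups; not the paper's data or full S-MTS procedure.
import numpy as np

rng = np.random.default_rng(1)
classes = {                                  # synthetic reference groups per fault class
    "scratch": rng.normal(0.0, 1.0, size=(100, 5)),
    "bump":    rng.normal(2.0, 1.5, size=(100, 5)),
    "stain":   rng.normal(-1.5, 0.8, size=(100, 5)),
}

# One Mahalanobis space (mean vector + inverse covariance) per class.
spaces = {}
for name, ref in classes.items():
    mean = ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    spaces[name] = (mean, cov_inv)

def mahalanobis_sq(x, mean, cov_inv):
    d = x - mean
    return float(d @ cov_inv @ d)

def classify(x):
    # 'Simultaneous': compare the distances to all class spaces at once.
    return min(spaces, key=lambda name: mahalanobis_sq(x, *spaces[name]))

print(classify(np.array([2.1, 1.8, 2.3, 1.9, 2.0])))   # expected to fall in "bump"
```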

A Survey of Korean Consumers' Awareness on Animal Welfare of Laying Hens (산란계 동물복지에 대한 국내 소비자의 인지도 조사)

  • Hong, Eui-Chul;Kang, Hwan-Ku;Park, Ki-Tae;Jeon, Jin-Joo;Kim, Hyun-Soo;Kim, Chan-Ho;Kim, Sang-Ho
    • Korean Journal of Poultry Science / v.45 no.3 / pp.219-228 / 2018
  • This survey was conducted twice to investigate Korean consumers' egg purchase behavior and perception of animal welfare. The respondents were women who were the main decision makers and caretakers in their households, and men living in one-person households. The survey was conducted using the Computer-Assisted Web Interview and gang survey methods. Among the key purchase considerations, 'price' received the highest response rate, and the rate of respondents considering 'packing date' increased in the second survey. Regarding a reasonable price for 10 eggs, the highest response rates were 53.8% and 42.9% in the first and second surveys, respectively, and the average appropriate prices were 2,482 won and 2,132 won. Consumers most often purchased eggs at large marts, followed by medium-sized supermarkets and chain supermarkets. Regarding awareness of animal welfare, the recognition rate (73.5%) was higher in the second survey than in the first. As for when respondents became aware of animal welfare, 59.0% did so before the insecticide egg crisis and 41.0% thereafter. Regarding whether they had ever seen the animal welfare certification mark or the animal welfare farm certification mark, 59.6% of respondents said they were seeing it for the first time, while 37.6% answered that they knew the animal welfare certification mark. Regarding the animal welfare system, the 'free-range' response rate was the highest at 85.8%; the 'free-range' fit response subsequently decreased by 34.2%p, while the 'barn' and 'European type' fit responses increased by 13.2%p and 24.1%p, respectively. The number of 'have never seen' and 'have never eaten' responses regarding recognition and eating experience of animal welfare certified eggs decreased, while the number of 'have seen' and 'have eaten' responses increased. Purchases of animal welfare certified eggs at department stores, organic farming cooperatives, and internet shopping malls were higher than those of conventional eggs. Of the total respondents, 92.0% were willing to purchase animal welfare eggs before the price was presented, but after the prices were presented, purchase intention fell to 62.7%, about 30%p lower than before. The most common reason for purchasing animal welfare certified eggs was 'I think food safety is likely to be high' (71.0%), and the most common reason for not intending to purchase was 'I think the price is high' (38.1%). In the sensory evaluation, the egg color and skin texture of conventional eggs were rated significantly higher than those of welfare-certified eggs (P<0.05), while for boiled eggs the egg whites of animal welfare certified eggs received higher scores (P<0.05). These results will contribute to promoting the animal welfare certification system for laying hens by providing animal welfare certified farmers with basic data on consumer awareness.

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms because financial data are easier to collect for these firms. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized firms. The purpose of this paper is to classify non-audited firms under distress according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). SOM is a type of artificial neural network trained with unsupervised learning to produce a lower-dimensional, discretized representation of the input space of the training samples, called a map. SOM differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the most popular and successful clustering algorithms. In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that the firms in this pattern may be under distress due to severe competition in their industries, and approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio; it is concluded that the firms in this pattern were under distress while pursuing business expansion, and about 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms; perhaps the firms in this pattern were distressed due to a bad short-term strategic decision or problems with the firms' owners, and approximately 18% of the firms fell under this pattern. This study makes both academic and practical contributions. Academically, non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, are classified with a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared and reliable financial data. Practically, even though only the financial data of non-audited firms are analyzed, the approach is useful for detecting the first symptoms of financial distress, which supports bankruptcy forecasting and early warning management. A limitation of this research is that only 100 firms were analyzed due to the difficulty of collecting financial data for non-audited firms, which made analysis by industry category or firm size difficult. Also, non-financial qualitative data are crucial for bankruptcy analysis, so non-financial qualitative factors will be taken into account in the next study. This study sheds some light on distress prediction for non-audited small and medium-sized firms in the future.
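
As a rough sketch of the clustering step, the code below trains a small Self-Organizing Map on a standardized firm-by-ratio matrix and reads off each firm's best-matching map cell. The minisom package is one possible implementation, and the 100x10 ratio matrix is a random placeholder rather than the paper's dataset.

```python
# Hedged sketch: SOM clustering of distressed firms' financial ratios.
# minisom is an illustrative library choice; the data are random placeholders,
# and the 5-pattern interpretation would come from inspecting the trained map.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
ratios = rng.normal(size=(100, 10))                              # placeholder: 100 firms x 10 ratios
ratios = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)     # standardize each ratio

som = MiniSom(x=3, y=3, input_len=10, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(ratios)
som.train_random(ratios, num_iteration=1000)

# Map each firm to its best-matching unit; groups of nearby units form the distress patterns.
firm_cells = [som.winner(row) for row in ratios]
cells, counts = np.unique(firm_cells, axis=0, return_counts=True)
for cell, count in zip(cells, counts):
    print(tuple(cell), count)
```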

LINAC-based Stereotactic Radiosurgery for Meningiomas (수막종에 대한 선형가속기형 정위방사선수술)

  • Shin Seong Soo;Kim Dae Yong;Ahn Yong Chan;Lee Jung Il;Nam Do-Hyun;Lim Do Hoon;Huh Seung Jae;Yeo Inhwan J;Shin Hyung Jin;Park Kwan;Kim BoKyoung;Kim Jong Hyun
    • Radiation Oncology Journal / v.19 no.2 / pp.87-94 / 2001
  • Purpose: To evaluate the role of LINAC-based stereotactic radiosurgery (SRS) in the management of meningiomas, we reviewed the clinical response, imaging response, and neurological deficits of patients treated at our institution. Methods and materials: Between February 1995 and December 1999, twenty-six patients were treated with SRS. Seven patients had undergone prior resection, and nineteen received SRS as the initial treatment. There were 7 male and 19 female patients, with a median age of 51 years (range, 14-67 years). At least one clinical symptom was present at the time of SRS in 17 patients, and cranial neuropathy was seen in 7 patients. The median tumor volume was 4.7 cm³ (range, 0.7-16.5 cm³). The mean marginal dose was 15 Gy (range, 10-20 Gy), delivered to the 80% isodose surface (range, 46-90%). The median clinical and imaging follow-up periods were 27 months (range, 1-71 months) and 25 months (range, 1-52 months), respectively. Results: Of 14 patients who had clinical follow-up of one year or longer, thirteen (93%) showed clinical improvement at the follow-up examination. Clinical symptoms worsened in one patient at 4 months after SRS as a result of intratumoral edema; this patient underwent surgical resection at 7 months. Of 14 patients who had radiologic follow-up of one year or longer, tumor volume decreased in 7 patients (50%) at a median of 11 months (range, 6-25 months), remained stable in 6 patients (43%), and increased in one patient (7%), who underwent surgical resection at 44 months. New radiation-induced neurological deficits developed in six patients (23%). Five patients (19%) had transient neurological deficits that resolved completely with conservative treatment including steroid therapy. Radiation-induced brain necrosis developed in one patient (3.8%) at 9 months after SRS, who subsequently underwent surgical resection of the tumor and necrotic tissue. Conclusions: LINAC-based SRS proves to be an effective and safe management strategy for small to moderately sized meningiomas, whether inoperable, residual, or recurrent, but long-term follow-up will be necessary to fully evaluate its efficacy. To reduce radiation-induced neurological deficits for large meningiomas and/or those in proximity to critical neural structures, more delicate treatment planning and optimal radiation dose decisions will be necessary.

Health Assessment of the Nakdong River Basin Aquatic Ecosystems Utilizing GIS and Spatial Statistics (GIS 및 공간통계를 활용한 낙동강 유역 수생태계의 건강성 평가)

  • JO, Myung-Hee;SIM, Jun-Seok;LEE, Jae-An;JANG, Sung-Hyun
    • Journal of the Korean Association of Geographic Information Studies / v.18 no.2 / pp.174-189 / 2015
  • The objective of this study was to reconstruct spatial information using the results of the investigation and evaluation of the health of the living organisms, habitat, and water quality at the investigation points for the aquatic ecosystem health of the Nakdong River basin, to support the rational decision making of the aquatic ecosystem preservation and restoration policies of the Nakdong River basin using spatial analysis techniques, and to present efficient management methods. To analyze the aquatic ecosystem health of the Nakdong River basin, punctiform data were constructed based on the position information of each point with the aquatic ecosystem health investigation and evaluation results of 250 investigation sections. To apply the spatial analysis technique, the data need to be reconstructed into areal data. For this purpose, spatial influence and trends were analyzed using the Kriging interpolation(ArcGIS 10.1, Geostatistical Analysis), and were reconstructed into areal data. To analyze the spatial distribution characteristics of the Nakdong River basin health based on these analytical results, hotspot(Getis-Ord Gi, $G^*_i$), LISA(Local Indicator of Spatial Association), and standard deviational ellipse analyses were used. The hotspot analysis results showed that the hotspot basins of the biotic indices(TDI, BMI, FAI) were the Andong Dam upstream, Wangpicheon, and the Imha Dam basin, and that the health grades of their biotic indices were good. The coldspot basins were Nakdong River Namhae, the Nakdong River mouth, and the Suyeong River basin. The LISA analysis results showed that the exceptional areas were Gahwacheon, the Hapcheon Dam, and the Yeong River upstream basin. These areas had high bio-health indices, but their surrounding basins were low and required management for aquatic ecosystem health. The hotspot basins of the physicochemical factor(BOD) were the Nakdong River downstream basin, Suyeong River, Hoeya River, and the Nakdong River Namhae basin, whereas the coldspot basins were the upstream basins of the Nakdong River tributaries, including Andong Dam, Imha Dam, and Yeong River. The hotspots of the habitat and riverside environment factor(HRI) were different from the hotspots and coldspots of each factor in the LISA analysis results. In general, the habitat and riverside environment of the Nakdong River mainstream and tributaries, including the Nakdong river upstream, Andong Dam, Imha Dam, and the Hapcheon Dam basin, had good health. The coldspot basins of the habitat and riverside environment also showed low health indices of the biotic indices and physicochemical factors, thus requiring management of the habitat and riverside environment. As a result of the time-series analysis with a standard deviation ellipsoid, the areas with good aquatic ecosystem health of the organisms, habitat, and riverside environment showed a tendency to move northward, and the BOD results showed different directions and concentrations by the year of investigation. These aquatic ecosystem health analysis results can provide not only the health management information for each investigation spot but also information for managing the aquatic ecosystem in the catchment unit for the working research staff as well as for the water environment researchers in the future, based on spatial information.