• Title/Summary/Keyword: 최상


A Study on Aviation Safety and Third Country Operator of EU Regulation in light of the Convention on International Civil Aviation (시카고협약체계에서의 EU의 항공법규체계 연구 - TCO 규정을 중심으로 -)

  • Lee, Koo-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.1
    • /
    • pp.67-95
    • /
    • 2014
  • Some Contracting States of the Chicago Convention issue a Foreign Air Operator Certificate (FAOC) and conduct various safety assessments of foreign operators flying into their territory. These FAOC requirements and safety audits of foreign operators are expanding to other parts of the world. While this trend strengthens aviation safety and helps reduce aircraft accidents, FAOC also burdens the other Contracting States to the Chicago Convention with additional requirements and delayed permissions. EASA (European Aviation Safety Agency) is a body governed by the European Basic Regulation. EASA was set up in 2003 and conducts specific regulatory and executive tasks in the field of civil aviation safety and environmental protection. EASA's mission is to promote the highest common standards of safety and environmental protection in civil aviation. Its tasks have expanded from airworthiness to air operations and currently include rulemaking and standardization for airworthiness, air crew, air operations, TCO, ATM/ANS safety oversight, aerodromes, etc. According to the Implementing Rule, Commission Regulation (EU) No 452/2014, EASA has the mandate to issue safety authorizations to commercial air carriers from outside the EU as from 26 May 2014. Third country operators (TCO) flying to any of the 28 EU Member States and/or to the 4 EFTA States (Iceland, Norway, Liechtenstein, Switzerland) must apply to EASA for a so-called TCO authorization. EASA will take over only the safety-related part of foreign operator assessment; operating permits will continue to be issued by the national authorities. A 30-month transition period ensures smooth implementation without interrupting the international air operations of foreign air carriers to the EU/EASA area. Operators currently flying to Europe can continue to do so, but must submit an application for a TCO authorization before 26 November 2014. After the transition period, which lasts until 26 November 2016, a valid TCO authorization will be a mandatory prerequisite, without which an operating permit cannot be issued by a Member State. The European TCO authorization regime does not, in principle, differentiate between scheduled and non-scheduled commercial air transport operations: all TCOs conducting commercial air transport need to apply for a TCO authorization. Operators with a potential need to operate to the EU at some time in the near future are advised to apply for a TCO authorization in due course, even when the date of operations is unknown. For the issues mentioned above, this paper studies the function of EASA and the EU Regulations, including the newly introduced TCO Implementing Rule, and suggests some proposals. It is hoped that this paper will 1) help preparation for TCO authorization, 2) help understanding of the international issue, 3) help improve Korean aviation regulations and government organizations, and 4) help compliance with international standards and contribute to the promotion of aviation safety.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are more likely to consume knowledge than to contribute it. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, Pareto ratio and Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of the group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
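The two focal measures used here, the Pareto ratio and the Gini coefficient, can be computed directly from per-editor contribution counts. The following Python sketch is only an illustration under that reading of the definitions; it is not the authors' code, and the sample edit counts are hypothetical.

    # A short sketch (not the authors' code) of the two focal measures, computed from
    # per-editor contribution counts for a single article; the sample list is hypothetical.
    import numpy as np

    def pareto_ratio(contributions):
        """Share of all contributions made by the top 20% of contributors."""
        counts = np.sort(np.asarray(contributions, dtype=float))[::-1]
        top_n = max(1, int(np.ceil(0.2 * counts.size)))
        return counts[:top_n].sum() / counts.sum()

    def gini(contributions):
        """Gini coefficient of the contribution distribution (0 = perfect equality)."""
        counts = np.sort(np.asarray(contributions, dtype=float))
        n = counts.size
        cum_share = np.cumsum(counts) / counts.sum()
        return (n + 1 - 2 * cum_share.sum()) / n

    edits_per_editor = [120, 45, 30, 8, 5, 3, 2, 1, 1, 1]   # hypothetical article history
    print(pareto_ratio(edits_per_editor), gini(edits_per_editor))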

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the problem of data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the failure to reflect differences in default risk that exist among ordinary (non-defaulting) companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although the prediction of corporate default risk using machine learning has been studied actively in recent years, model bias issues remain because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high, and strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture the complex nonlinear relationships between default risk and various corporate information while retaining the advantage of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts of each sub-model used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts from the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help in designing models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
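As a rough illustration of the stacking approach summarized above, the Python sketch below combines out-of-fold sub-model forecasts with a linear meta-model via scikit-learn's StackingRegressor. The sub-models, hyperparameters, and synthetic data are stand-in assumptions (the paper's sub-models include Random Forest, MLP, and CNN models trained on seven data splits, with the Merton-based default risk as the target).

    # A minimal scikit-learn sketch of the stacking idea: out-of-fold forecasts from
    # sub-models feed a meta-model. Model choices, hyperparameters, and the synthetic
    # data are illustrative assumptions, not the authors' configuration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))              # stand-in for the financial-statement columns
    y = 1 / (1 + np.exp(-X[:, :5].sum(axis=1)))  # stand-in for the Merton-based default risk

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    stack = StackingRegressor(
        estimators=[
            ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
            ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)),
        ],
        final_estimator=Ridge(),
        cv=7,  # mirrors the seven-way split used to generate sub-model forecasts
    )
    stack.fit(X_tr, y_tr)                        # base models contribute out-of-fold predictions
    print("held-out R^2:", stack.score(X_te, y_te))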

Postoperative Chemoradiotherapy in Locally Advanced Rectal Cancer (국소 진행된 직장암에서 수술 후 화학방사선요법)

  • Chai, Gyu-Young;Kang, Ki-Mun;Choi, Sang-Gyeong
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.221-227
    • /
    • 2002
  • Purpose: To evaluate the role of postoperative chemoradiotherapy in locally advanced rectal cancer, we retrospectively analyzed the treatment results of patients treated with curative surgical resection and postoperative chemoradiotherapy. Materials and Methods: From April 1989 through December 1998, 119 patients were treated with curative surgery and postoperative chemoradiotherapy for rectal carcinoma at Gyeongsang National University Hospital. Patient age ranged from 32 to 73 years, with a median of 56 years. Low anterior resection was performed in 59 patients and abdominoperineal resection in 60. Forty-three patients were AJCC stage II and 76 were stage III. Radiation was delivered with 6 MV X-rays using AP-PA two fields, AP-PA plus both lateral four fields, or PA plus both lateral three fields. The total radiation dose ranged from 40 Gy to 56 Gy. In 73 patients, bolus infusions of 5-FU (400 mg/m²) were given during the first and fourth weeks of radiotherapy; after completion of radiotherapy, an additional four to six cycles of 5-FU were given. Oral 5-FU (Furtulone) was given for nine months in 46 patients. Results: Forty (33.7%) of the 119 patients showed treatment failure. Local failure occurred in 16 (13.5%) patients: 1 (2.3%) of 43 stage II patients and 15 (19.7%) of 76 stage III patients. Distant failure occurred in 31 (26.1%) patients, of whom 5 (11.6%) were stage II and 26 (34.2%) were stage III. Five-year actuarial survival was 56.2% overall, 71.1% in stage II and 49.1% in stage III (p=0.0008). Five-year disease-free survival was 53.3% overall, 68.1% in stage II and 45.8% in stage III (p=0.0006). Multivariate analysis showed that T stage and N stage were significant prognostic factors for five-year survival, and that T stage, N stage, and preoperative CEA value were significant prognostic factors for five-year disease-free survival. Bowel complications occurred in 22 patients and were treated surgically in 15 (12.6%) and conservatively in 7 (5.9%). Conclusion: Postoperative chemoradiotherapy was confirmed to be an effective modality for local control of rectal cancer, but the distant failure rate remained high. More effective modalities should be investigated to lower the distant failure rate.

Treatment Results of CyberKnife Radiosurgery for Patients with Primary or Recurrent Non-Small Cell Lung Cancer (원발 혹은 재발성 비소세포 폐암 환자에서 사이버나이프률 이용한 체부 방사선 수술의 치료 결과)

  • Kim, Woo-Chul;Kim, Hun-Jung;Park, Jeong-Hoon;Huh, Hyun-Do;Choi, Sang-Huoun
    • Radiation Oncology Journal
    • /
    • v.29 no.1
    • /
    • pp.28-35
    • /
    • 2011
  • Purpose: Recently, the use of radiosurgery as a local therapy in patients with early stage non-small cell lung cancer has come to be favored over surgical resection. To evaluate the efficacy of radiosurgery, we analyzed the results of stereotactic body radiosurgery in patients with primary or recurrent non-small cell lung cancer. Materials and Methods: We retrospectively reviewed the medical records of 24 patients (28 lesions) with non-small cell lung cancer (NSCLC) who received stereotactic body radiosurgery (SBRT) at Inha University Hospital. Among the 24 patients, 19 had primary NSCLC and five had recurrent disease, three of them in previously treated areas. Four patients with primary NSCLC received SBRT after conventional radiation therapy as a boost treatment. The initial stages were IA in 7, IB in 3, IIA in 2, IIB in 2, IIIA in 3, IIIB in 1, and IV in 6. The T stages at SBRT were T1 in 13 lesions, T2 in 12, and T3 in 3. A 6 MV X-ray beam was used for SBRT, and the prescribed dose was 15~60 Gy (median 50 Gy) to PTV1 in 3~5 fractions. The median follow-up time was 469 days. Results: The median GTV was 22.9 mL (range, 0.7 to 108.7 mL) and the median PTV1 was 65.4 mL (range, 5.3 to 184.8 mL). The response at 3 months was complete response (CR) in 14 lesions, partial response (PR) in 11, and stable disease (SD) in 3, whereas the response at the last follow-up was CR in 13 lesions, PR in 9, SD in 2, and progressive disease (PD) in 4. Of the 10 patients in stage I, one died of pneumonia and local failure was identified in one. Of the 10 patients in stages III-IV, three died, local and loco-regional failure was identified in one, and regional failure in two. The overall local control rate was 85.8% (local failure in 4 of 28 lesions). Local recurrence was recorded in three of the eight lesions that received less than a biologically equivalent dose of 100 Gy₁₀; among the 20 lesions that received more than 100 Gy₁₀, only one lesion failed locally. The recurrence rate was higher in patients with centrally located tumors and tumors staged T2 or above. Conclusion: SBRT using a CyberKnife proved to be an effective treatment modality for early stage NSCLC, based on a high local control rate without severe complications. SBRT above a total of 100 Gy₁₀ is recommended for peripheral T1 stage NSCLC.
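The 100 Gy₁₀ threshold cited above is a biologically effective dose under the standard linear-quadratic model with an α/β of 10 Gy; the short Python sketch below is a generic textbook calculation, not taken from the paper, showing how the median prescription in this series reaches that threshold.

    # Generic linear-quadratic biologically effective dose (BED) calculation; the formula
    # and the alpha/beta = 10 Gy tumor value are textbook conventions implied by the
    # Gy10 notation, not parameters reported in this paper.
    def bed(total_dose_gy, n_fractions, alpha_beta_gy=10.0):
        dose_per_fraction = total_dose_gy / n_fractions
        return total_dose_gy * (1 + dose_per_fraction / alpha_beta_gy)

    # The median prescription in this series, 50 Gy in 5 fractions:
    print(bed(50, 5))   # 50 * (1 + 10/10) = 100.0 Gy10, right at the reported threshold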

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple individual body movements to the recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require a long time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data in a short time. In this paper, we propose a deep learning-based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as a redefined subset of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation. First, a data preprocessing procedure was introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors, normalization was performed for each x, y, and z axis value of the sensor data, and sequence data were generated with a sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained with the adaptive moment estimation (Adam) optimization algorithm, and the mini-batch size was set to 128. Dropout was applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that allow models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in data not considered at the model training stage.
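For readers who want a concrete picture of the architecture described above, the following PyTorch sketch assembles the three-layer CNN, two-layer 128-cell LSTM, and softmax classifier with the stated training settings; it is an illustrative reconstruction, not the authors' implementation, and the window length, channel count, kernel sizes, and dropout rate are assumptions.

    # A minimal PyTorch sketch of the CNN + LSTM classifier described above. The window
    # length, channel count (3 sensors x 3 axes), filter counts, kernel widths, and
    # dropout rate are assumptions; layer counts, LSTM size, loss, optimizer, weight
    # initialization, and learning-rate decay follow the abstract.
    import torch
    import torch.nn as nn

    SEQ_LEN, N_CHANNELS, N_CLASSES = 128, 9, 2   # assumed sliding-window size and sensor axes

    class AccompanyNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Three 1-D convolutional layers without pooling, preserving temporal resolution.
            self.cnn = nn.Sequential(
                nn.Conv1d(N_CHANNELS, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.drop = nn.Dropout(0.5)                    # dropout on the LSTM inputs
            self.lstm = nn.LSTM(input_size=64, hidden_size=128,
                                num_layers=2, batch_first=True)
            self.fc = nn.Linear(128, N_CLASSES)            # softmax is applied inside the loss

        def forward(self, x):                              # x: (batch, SEQ_LEN, N_CHANNELS)
            h = self.cnn(x.transpose(1, 2))                # -> (batch, 64, SEQ_LEN)
            h = self.drop(h.transpose(1, 2))               # -> (batch, SEQ_LEN, 64)
            out, _ = self.lstm(h)
            return self.fc(out[:, -1])                     # logits from the last time step

    def init_weights(m):                                   # N(0, 0.1) initialization
        if isinstance(m, (nn.Conv1d, nn.Linear)):
            nn.init.normal_(m.weight, mean=0.0, std=0.1)

    model = AccompanyNet()
    model.apply(init_weights)
    criterion = nn.CrossEntropyLoss()                      # cross-entropy loss
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # x0.99 per epoch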

GENERAL STRATIGRAPHY OF KOREA (한반도층서개요(韓半島層序槪要))

  • Chang, Ki Hong
    • Economic and Environmental Geology
    • /
    • v.8 no.2
    • /
    • pp.73-87
    • /
    • 1975
  • Regional unconformities have been used as boundaries of major stratigraphic units in Korea. The term "synthem" has already been proposed for formal unconformity-bounded stratigraphic units of maximum magnitude (ISSC, 1974). The unconformity-based classification of the strata in the cratonic area of Korea comprises, in ascending order, the Kyerim, Sangwŏn, Josŏn, Pyŏngan, Daedong, and Kyŏngsang Synthems, and the Cenozoic Erathem. The unconformities separating them from each other are either orogenic or epeirogenic (and vertical tectonic). The sub-Sangwŏn unconformity is a non-conformity above the basement complex in Korea. The unconformities between the Sangwŏn, Josŏn, and Pyŏngan Synthems are disconformities denoting late Precambrian and Paleozoic crustal quiescence in Korea. The unconformities between the Pyŏngan, Daedong, and Kyŏngsang Synthems are angular unconformities representing Mesozoic orogenies. The bounding unconformities of the Kyŏngsang Synthem include non-conformable parts overlying the Jurassic and Late Cretaceous granitic rocks.
