• Title/Summary/Keyword: unbiasedness (비편향성)


An Interpretation of the Gaps between 'Fact' and 'Oral Materials' in Political Elite Oral History ('사실'과 '구술자료'의 간극에 대한 하나의 해석 정치엘리트 구술연구를 중심으로)

  • Jo, Young-Jae
    • The Korean Journal of Archival Studies
    • /
    • no.43
    • /
    • pp.43-70
    • /
    • 2015
  • The value and validity of elite oral materials have been questioned because of their gaps with 'fact'. The purpose of this article is to analyze these gaps and to propose some solutions that can reduce them. According to the analysis, there are three qualitatively different types of gap. The first type is produced in the process of generating memory: it arises because informants perceive and memorize facts that exist outside themselves. Selective cognition, selective memory, and individual experience fall under this category. The second type is produced in the process of preserving memory; forgetting and memory transformation are its main examples. The third type is produced in the process of interviewing the informants; false statements or lies fall into this category. The main conclusions are as follows. 1) Not all gaps in oral materials are negative, because some of them (the first and second types) are not only part of the real world but also very useful for interpreting it. 2) The third type is harmful and needs to be eradicated or reduced; to this end, this article proposes some solutions.

Development of Fire Detection Model for Underground Utility Facilities Using Deep Learning : Training Data Supplement and Bias Optimization (딥러닝 기반 지하공동구 화재 탐지 모델 개발 : 학습데이터 보강 및 편향 최적화)

  • Kim, Jeongsoo;Lee, Chan-Woo;Park, Seung-Hwa;Lee, Jong-Hyun;Hong, Chang-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.12
    • /
    • pp.320-330
    • /
    • 2020
  • Fire is difficult to detect well in images using deep learning because of its high irregularity. In particular, there is little data on fire detection in underground utility facilities, which have poor lighting conditions and many objects that resemble fire. These factors make fire detection challenging and degrade the performance of deep learning models. This study therefore proposed a deep-learning fire detection model and estimated its performance. The proposed model combines a basic convolutional neural network, the Inception block of GoogLeNet, and the skip connection of ResNet to optimize the model for fire detection in underground utility facilities; a training technique for the model was also proposed. To examine the effectiveness of the method, the trained model was applied to fire images that included fire and non-fire objects (which can be mistaken for fire) under underground facilities or similar conditions, and the results were analyzed. Metrics such as precision and recall were compared with those reported for deep learning models in other studies to estimate the model's performance qualitatively. The results showed that the proposed model has high precision and recall for fire detection under low light intensity, with low rates of both erroneous and missed detections for objects that resemble fire.
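A minimal sketch of the kind of architecture the abstract describes: a small convolutional network combining an Inception-style block (parallel multi-scale filters) with a ResNet-style skip connection. Keras is assumed; the input size, channel counts, and binary fire/non-fire head are illustrative choices, not the paper's configuration.

```python
# Sketch only: CNN combining an Inception-style block with a residual skip
# connection, as the abstract describes. All sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

def inception_block(x, ch):
    # parallel 1x1 / 3x3 / 5x5 convolutions plus a pooled branch, concatenated
    b1 = layers.Conv2D(ch, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(ch, 5, padding="same", activation="relu")(x)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(ch, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])

def residual_block(x, ch):
    shortcut = layers.Conv2D(ch, 1, padding="same")(x)  # match channels for the skip
    y = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(ch, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

inputs = layers.Input(shape=(224, 224, 3))           # illustrative input size
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = inception_block(x, 32)
x = residual_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # fire vs. non-fire
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```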

Mediating Effect of Planned Happenstance Skills between the Belief in Good Luck and Entrepreneurial Opportunity (행운에 대한 신념과 창업 기회 역량과의 관계에서 우연기술의 매개효과에 관한 연구)

  • Hwangbo, Yun;Kim, YoungJun;Kim, Hong-Tae
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.5
    • /
    • pp.79-92
    • /
    • 2019
  • When asked about their success factors, successful entrepreneurs and celebrities often say they were lucky. What is remarkable is that their attitudes toward luck differ. Although the belief in luck is in fact a widespread notion, there has been little scientific verification of it. In this study, we regarded good luck not as something determined randomly by the external environment, but as something individuals can control through their internal attributes. This study is significant in that it empirically elucidates what kinds of effort attract good luck, whereas previous research largely ended in the vague logic that luck comes down to an internal locus of control among internal entrepreneurial qualities and that effort can make a successful entrepreneur. We introduced the concept of belief in good luck to avoid confirmation bias, that is, interpreting one's experience in a direction that matches what one wants to believe. Using a good-luck-belief questionnaire from previous studies, we tried to verify that those who hold this belief can increase their entrepreneurial opportunity capability through planned happenstance skills. Entrepreneurial opportunity capability was chosen as the dependent variable based on conventional research, in which the process of recognizing and exploiting entrepreneurial opportunities is an important part of entrepreneurship research. For the empirical study, we surveyed a total of 332 people. The analysis showed, first, that belief in good luck has a positive impact on all sub-factors of planned happenstance skills: curiosity, persistence, flexibility, optimism, and risk tolerance. Second, only the persistence, optimism, and risk tolerance sub-factors have a positive impact on opportunity capability. Third, the persistence, optimism, and risk tolerance sub-factors of planned happenstance skills mediate between belief in luck and entrepreneurial opportunity capability. This study is significant in that it logically shows that those in charge of business incubation and education can obtain concrete direction when planning training programs to further enhance entrepreneurial opportunity capability, an ability important for an entrepreneur's success.
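The mediation claim above is testable with two regressions and a bootstrap confidence interval for the indirect (a·b) effect. The sketch below, assuming pandas and statsmodels, uses synthetic stand-in data and illustrative variable names; it is not the authors' analysis, which examined five sub-factors across 332 respondents.

```python
# Sketch of a regression-based mediation test (luck belief -> planned
# happenstance skill -> opportunity capability) with a bootstrap CI for the
# indirect effect. Data and column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 332  # sample size reported in the abstract
luck = rng.normal(size=n)                                   # belief in good luck
skill = 0.5 * luck + rng.normal(size=n)                     # mediator
capability = 0.4 * skill + 0.1 * luck + rng.normal(size=n)  # dependent variable
df = pd.DataFrame({"luck": luck, "skill": skill, "capability": capability})

def indirect_effect(d):
    a = smf.ols("skill ~ luck", d).fit().params["luck"]                # path a
    b = smf.ols("capability ~ skill + luck", d).fit().params["skill"]  # path b
    return a * b                                                       # indirect a*b

# Percentile bootstrap CI for the indirect effect.
boot = [indirect_effect(df.sample(n, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```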

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data-imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although there have recently been active studies predicting corporate default risk with machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine-learning-based default risk prediction models, which take less time to compute. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine-learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through combinations of various sub-models. We hope this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine-learning-based models.
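A compact sketch of the stacking setup described above: cross-validated sub-model predictions feed a meta-learner, and the stacked forecasts are compared with a single model's by a normality check followed by a nonparametric paired test. scikit-learn and SciPy are assumed; the regressors, the synthetic data, and the use of the signed-rank form of the Wilcoxon test on paired errors are illustrative choices, not the paper's exact sub-models or its Merton-based target.

```python
# Sketch only: stacking ensemble vs. a single model, with a Shapiro-Wilk
# normality check and a nonparametric paired Wilcoxon comparison.
import numpy as np
from scipy.stats import shapiro, wilcoxon
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the financial feature matrix and continuous risk target.
X, y = make_regression(n_samples=2000, n_features=20, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Out-of-fold sub-model forecasts feed the meta-learner; cv=7 mirrors the
# seven-way split of the training data described in the abstract.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("ridge", Ridge())],
    final_estimator=Ridge(),
    cv=7,
)
stack.fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Paired absolute errors on the test set for the two models.
err_stack = np.abs(stack.predict(X_te) - y_te)
err_rf = np.abs(rf.predict(X_te) - y_te)
print("Shapiro-Wilk p =", shapiro(err_stack - err_rf).pvalue)   # normality check
print("Wilcoxon p =", wilcoxon(err_stack, err_rf).pvalue)       # paired test
```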

Relationships between Nailfold Plexus Visibility, and Clinical Variables and Neuropsychological Functions in Schizophrenic Patients (정신분열병 환자에서 손톱 주름 총 시도(叢 視度) (Nailfold Plexus Visibility)와 임상양상, 신경심리 기능과의 관계)

  • Kang, Dae-Yeob;Jang, Hye-Ryeon
    • Korean Journal of Biological Psychiatry
    • /
    • v.9 no.1
    • /
    • pp.50-61
    • /
    • 2002
  • Objectives: High nailfold plexus visibility can indirectly reflect central nervous system defects as an etiologic factor of schizophrenia. Previous studies suggest that this visibility is particularly related to the negative symptoms of schizophrenia and to frontal lobe deficits. In this study, we examined the relationships between nailfold plexus visibility and various clinical variables and neuropsychological functions in schizophrenic patients. Methods: Forty patients (21 males, 19 females) satisfying the DSM-IV criteria for schizophrenia and thirty-eight normal controls (20 males, 18 females) were measured for the Plexus Visualization Score (PVS) using capillary microscopic examination. For the assessment of psychopathology, process-reactivity, premorbid adjustment, and neuropsychological functions, we used the Positive and Negative Syndrome Scale (PANSS), the Ullmann-Giovannoni Process-Reactive Questionnaire (PRQ), the Phillips Premorbid Adjustment Scale (PAS), the Korean Wechsler Adult Intelligence Scale (KWIS), the Continuous Performance Test (CPT), the Wisconsin Card Sort Test (WCST), and the Word Fluency Test. We also collected data on clinical variables. Results: PVS was negatively correlated with the PANSS positive symptom score and composite score. There were no correlations between PVS and the PRQ score, the PAS score, or the neuropsychological variables. Conclusions: This study showed that nailfold plexus visibility is a characteristic feature in some schizophrenic patients and that higher plexus visibility is associated with the negative symptoms of schizophrenia. There was no association between plexus visibility and neuropsychological functions.


Mineralogical Characterization of the Chuncheon Nephrite: Mineral Facies, Mineral Chemistry and Pyribole Structure (춘천 연옥 광물의 광물학적 특성 : 광물상, 광물 화학 및 혼성 격자 구조)

  • Noh, Jin Hwan;Cho, Hyen Goo
    • Journal of the Mineralogical Society of Korea
    • /
    • v.6 no.2
    • /
    • pp.57-79
    • /
    • 1993
  • The Chuncheon nephrite, which was formed by polymetasomatic alteration of dolomitic marble, can be classified into pale green, green, dark green, and grey types on the basis of occurrence and of mineralogical and textural characteristics. The nephrites consist chiefly of fibrous or hair-like (length/width ratio > 10) cryptocrystalline (crystal width < $2{\mu}m$) tremolite and include lesser amounts of microcrystalline diopside, calcite, clinochlore, and sphene as impurities. The oriented and rather curved crystal aggregates of nephritic tremolite are densely interwoven, resulting in a massive-fibrous texture that may explain the characteristic toughness of nephritic jade. The characteristic greenish color of the nephrite is more likely related to Fe than to Cr or Ni. However, the variation of color and tint in the Chuncheon nephrite also depends on mineralogical and textural differences such as crystallinity, texture, and impurities. The chemical composition of the nephritic tremolite is not stoichiometric and is rather dispersed, especially in the abundances of Al, Mg, and Ca. The Al content and Mg/Ca ratio of the nephritic tremolite increase slightly as the greenish color of the nephrite deepens. The Fe content in the nephritic tremolite is generally very low but comparatively higher in the dark green nephrite. In nephritic tremolite, wide-chain pyriboles are irregularly intercalated between normal double chains, forming chain-width disorder. In HRTEM observations, most nephritic tremolites in the Chuncheon nephrite show various types of chain-width defects, such as triple chains (jimthompsonite), quintuple chains (chesterite), or occasionally quadruple chains. The degree of chain-width disorder in the nephritic tremolite tends to increase as the greenish color deepens. Triple chains are the most common type, and quadruple chains are rarely observed, only in the grey nephrite. The presence of pyribole structure in the nephritic tremolite is closely related to the increase of Al content and Mg/Ca ratio, a rather dispersed chemical composition, a decrease of the relative intensity of the (001) XRD reflection, and an increase in the b-axis dimension of the unit cell. In addition, the degree and variation of chain-width disorder across nephrite types may indicate that metastability increased through rapid diffusion of Mg-rich fluid during nephrite formation.


A Study on the Eco-friendly Housing in the Near future based on the Ecological Design (생태학적 디자인을 기반으로 한 근 미래형 친환경주택연구)

  • Choo, Jin;Yoo, Bo-Hyeon
    • Archives of design research
    • /
    • v.18 no.4 s.62
    • /
    • pp.105-118
    • /
    • 2005
  • The housing environment for human beings has been diversified and made more convenient by the development of high technology and the civilization brought about by industrialization in the 20th century. In the 21st century, how to overcome the ecological limits of biased, development-centered advancement, that is, how to preserve a clean and healthy 'sustainable environment' and hand it over to the next generations, has become one of the most talked-about issues. Environmental symbiosis means environmental harmony across a wide range, from the micro-dimensional perspective to the macro one. The three goals of an environmentally friendly house are to preserve the global environment, to harmonize with the surrounding environment, and to offer a healthy and comfortable living environment. From the viewpoint of environmental symbiosis, houses should be designed to save energy and natural resources for the preservation of the global environment, to collect natural energy resources such as solar heat and wind force, to recycle waste water, and to recycle and reduce waste matter. The environmentally friendly house has thus become a new social mission that is difficult to pursue, let alone realize, without a shift to a new paradigm: ecologism.


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives can be obtained on each edge of a computational graph; with these partial derivatives, the software can compute the derivative of any node with respect to any variable using the chain rule of calculus. First, the convenience of coding is in the order CNTK, Tensorflow, Theano. The criterion is simply the length of the code; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as Theano's, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of execution speed is that there is no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. The important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning, the availability of examples and references matters as well.
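To make the chain-rule mechanism concrete, here is a toy reverse-mode automatic differentiation over a computational graph. It is a sketch of the general technique only, not the implementation used by Theano, Tensorflow, or CNTK; the backward pass below suffices for tree-shaped graphs, whereas real frameworks topologically sort the graph first.

```python
# Toy reverse-mode autodiff: each Node stores its forward value and, for each
# parent, the local partial derivative along that edge of the graph.
import math

class Node:
    def __init__(self, value, parents=()):
        self.value = value      # forward value at this graph node
        self.parents = parents  # (parent, local partial derivative) edges
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

def sigmoid(x):
    s = 1.0 / (1.0 + math.exp(-x.value))
    return Node(s, [(x, s * (1.0 - s))])

def backward(output):
    # Push gradients from the output back along each edge, multiplying by the
    # local partials (the chain rule). Sufficient for tree-shaped graphs.
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

w, x, b = Node(2.0), Node(3.0), Node(0.5)
y = sigmoid(w * x + b)   # forward pass builds the graph
backward(y)              # reverse pass accumulates dy/d(node)
print(w.grad, x.grad, b.grad)
```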

A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.117-130
    • /
    • 2005
  • In both the deterministic User Optimal Traffic Assignment Model (UOTAM) and the stochastic UOTAM, travel time, the major criterion for loading traffic over a transportation network, is defined as the sum of link travel time and turn delay at intersections. In this assignment method, drivers' actual route perception processes and choice behaviors, which can be main explanatory factors, are not sufficiently considered and may therefore result in biased traffic loading. Although there have been efforts in stochastic UOTAM to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, these have not borne fundamental fruit; they were trials resting on unreasonable assumptions: a Probit model with a truncated travel-time distribution function and a Logit model assuming independence of inter-link congestion. The critical reason deterministic UOTAM has not been able to reflect route perception cost is that the perception cost takes a different value for each origin, destination, and path connecting that origin and destination. To find the optimum route between an OD pair, one therefore encounters the route enumeration problem, in which all routes connecting the pair must be compared; this is the critical cause of computational failure, because an uncountable number of paths may be enumerated as the transportation network grows. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without route enumeration between OD pairs. To this end, this study defines the link as the minimal unit of a path. Since each link can then be treated as a path, in the two-link search process of the link-label-based optimum path algorithm, route enumeration between an OD pair is reduced to finding optimum paths over all links. The computational burden of this method is no more than that of the link-label-based optimum path algorithm. Each distinct perception cost is embedded as a quantitative value generated by comparing the sub-path from the origin to the currently searched link with the link being extended.
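A rough sketch of a link-label shortest-path search of the kind the abstract builds on: labels are kept on links rather than nodes, so a perception cost can be added at each two-link extension without enumerating whole O-D routes. The network, costs, and the `perception()` function are illustrative assumptions, not the authors' formulation.

```python
# Sketch only: Dijkstra-style search where labels live on links, allowing a
# link-to-link perceived cost at each extension step.
import heapq

# Directed links with travel times; purely illustrative network.
links = {("A", "B"): 4.0, ("A", "C"): 2.0, ("B", "D"): 3.0,
         ("C", "B"): 1.0, ("C", "D"): 5.0}

def perception(prev_link, next_link):
    # Placeholder for the perceived cost of extending the sub-path from
    # prev_link onto next_link (e.g., a turn or familiarity penalty).
    return 0.2

def link_label_shortest(origin, dest):
    # Seed the heap with every link leaving the origin; each label belongs to
    # a link, not a node, so extension costs can differ per link pair.
    heap = [(t, link) for link, t in links.items() if link[0] == origin]
    heapq.heapify(heap)
    best = {}
    while heap:
        cost, link = heapq.heappop(heap)
        if link in best:
            continue
        best[link] = cost                      # permanent label for this link
        if link[1] == dest:
            return cost
        for nxt, t in links.items():           # two-link extension step
            if nxt[0] == link[1] and nxt not in best:
                heapq.heappush(heap, (cost + t + perception(link, nxt), nxt))
    return float("inf")

print(link_label_shortest("A", "D"))  # A->C->B->D = 2 + 1 + 3 + 2*0.2 = 6.4
```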

Redefinition of the Concept of Fishing Vessel and Legislation Adjustment (낚시어선 개념의 재정립과 법제 정비에 관한 연구)

  • Yeong-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.6
    • /
    • pp.639-652
    • /
    • 2023
  • The fundamental background behind the introduction of the fishing vessel system was to allow petty small-scale fishers to engage in pure fishery activities with their vessels in normal times and to engage in the fishing (angling) vessel business only during specific periods (the closed fishing season, etc.), thereby granting the business the status of an auxiliary tool for the economic activities of petty small-scale fishers. In addition, because the business uses fishing vessels registered under the Fishing Vessels Act, the vessel's form should retain the general and universal structure that makes actual fishing activities in the field practical, in accordance with the relevant regulations. At present, however, most fishing vessel proprietors focus only on increasing income; rather than building vessels in a reasonable form suited to the original purpose of a general fishing vessel, they prefer an expedient, abnormal hull form, that is, a hull structure biased toward the angling vessel business. As a result, this causes serious problems in safety management, as well as conflicts with fishers who regard the angling vessel business as merely a side job among their overall fishery activities [undermining relative equity in government support measures (tax-free oil supply, etc.) and depleting livelihood fish stocks]. The most fundamental cause of this problem is that the current Fishing Management and Promotion Act limits the concept of the fishing (angling) vessel to fishing vessels registered under the Fishing Vessels Act and applies survey standards accordingly. Accordingly, through analysis of the distribution status of fishing vessels, their structural characteristics, the operation status of the angling vessel business, and the government's fishing promotion policies, this study proposes reorganizing the relevant laws (regulations) to fit the current reality of the fishing vessel concept, separating the current angling vessel from ordinary fishing vessels and operating it as a dedicated angling vessel.