• Title/Summary/Keyword: judgment bias (판단편향)


Langerhans Cell Histiocytosis in the Skull: Comparison of MR Image and Other Images (두개골의 랑게르한스 세포 조직구증: 자기공명영상과 다른 영상과의 비교)

  • Lim, Soo-Jin;Lim, Myung-Kwan;Park, Sun-Won;Kim, Jung-Eun;Kim, Ji-Hye;Kim, Deok-Hwan;Lee, Seok-Lyong;Suh, Chang-Hae
    • Investigative Magnetic Resonance Imaging / v.13 no.1 / pp.74-80 / 2009
  • Purpose : To evaluate the characteristic MR imaging findings of Langerhans cell histiocytosis (LCH) in the skull and to compare them with those of plain radiography and computed tomography. Materials and Methods : A total of 10 lesions in 9 patients (age range 5-42 years, mean age 18; all female) with LCH in the skull were included in our study. Nine lesions in nine patients were histologically confirmed by surgery or fine-needle aspiration biopsy. All patients underwent MRI; plain radiography and CT were performed in 7 patients (8 lesions). Two experienced neuroradiologists independently reviewed the radiological examinations with attention to the location, size, shape, and nature of the lesions in the skull, and compared the extent of the lesions and their extension to adjacent structures. Results : The lesions were distributed throughout the skull without a predilection site. On MRI, the lesions appeared as well-enhancing soft tissue masses (10/10), mainly in the diploic space (8/10), with extension to the scalp (9/10) and dura mater (7/10); dural enhancement (7/10) and thickening (4/10) were also seen. The largest diameter of the soft tissue masses ranged from 1.1 to 6.8 cm, and the masses were round (5/10) or oval (5/10) in shape. On CT, the lesions appeared as soft tissue masses involving the diploic space (6/8), and scalp extension (7/8) was also well visualized. Although bony erosion or destruction was seen more clearly on CT than on MRI, enhancement of the soft tissue masses and dura was not well visualized on CT. On plain radiography, the lesions appeared as osteolytic masses with punched-out (4/8) or beveled-edge (4/8) margins, but scalp and dural extension could not be seen. Conclusion : The characteristic MR findings in patients with LCH are a soft tissue mass in the diploic space with extension to the dura and scalp, and MRI is a better imaging modality for this purpose than plain radiography or CT.


A Study on the Medical Application and Personal Information Protection of Generative AI (생성형 AI의 의료적 활용과 개인정보보호)

  • Lee, Sookyoung
    • The Korean Society of Law and Medicine / v.24 no.4 / pp.67-101 / 2023
  • The utilization of generative AI in the medical field is also being rapidly researched. Access to vast data sets reduces the time and energy spent in selecting information. However, as the effort put into content creation decreases, there is a greater likelihood of associated issues arising. For example, with generative AI, users must discern the accuracy of results themselves, as these AIs learn from data within a set period and generate outcomes. While the answers may appear plausible, their sources are often unclear, making it challenging to determine their veracity. Additionally, the possibility of presenting results from a biased or distorted perspective cannot be discounted at present on ethical grounds. Despite these concerns, the field of generative AI is continually advancing, with an increasing number of users leveraging it in various sectors, including biomedical and life sciences. This raises important legal considerations regarding who bears responsibility and to what extent for any damages caused by these high-performance AI algorithms. A general overview of issues with generative AI includes those discussed above, but another perspective arises from its fundamental nature as a large-scale language model ('LLM') AI. There is a civil law concern regarding "the memorization of training data within artificial neural networks and its subsequent reproduction". Medical data, by nature, often reflects personal characteristics of patients, potentially leading to issues such as the regeneration of personal information. The extensive application of generative AI in scenarios beyond traditional AI brings forth the possibility of legal challenges that cannot be ignored. 
Upon examining the technical characteristics of generative AI, with a focus on legal issues concerning the protection of personal information, it is evident that current personal information protection laws, particularly in the context of health and medical data utilization, are inadequate. These laws provide processes for anonymizing and de-identifying specific personal information, but they fall short when generative AI is applied as software in medical devices. To address the functionalities of generative AI in clinical software, a reevaluation and adjustment of the existing laws for the protection of personal information are imperative.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This approach solves the problem of data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary (non-defaulted) companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default risk assessment services to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although the prediction of corporate default risk using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that a company's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced the bias of individual models by utilizing a stacking ensemble technique that synthesizes various machine learning models. This makes it possible to capture the complex nonlinear relationships between default risk and various corporate information while preserving the advantage of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs consisting of the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will serve as a resource for increasing practical use by overcoming the limitations of existing machine learning-based models.
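The Merton-model risk measure described in the abstract can be sketched as follows. This is a minimal illustration with made-up inputs, not the paper's actual estimation procedure (which derives asset value and volatility from market capitalization and stock price volatility); all parameter values here are hypothetical.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default_probability(asset_value, debt, mu, sigma, horizon=1.0):
    """Distance-to-default and default probability under the Merton model.

    asset_value : current firm (asset) value
    debt        : face value of debt due at the horizon (default point)
    mu          : expected asset return, annualized
    sigma       : asset-value volatility, annualized
    horizon     : time to horizon in years
    """
    dd = (log(asset_value / debt) + (mu - 0.5 * sigma ** 2) * horizon) \
         / (sigma * sqrt(horizon))
    return dd, norm_cdf(-dd)

# Hypothetical firm: assets 120, debt 100, 5% drift, 30% volatility.
dd, pd = merton_default_probability(120.0, 100.0, mu=0.05, sigma=0.3)
```

Because the measure is continuous, every firm gets a risk value even when no default event is observed, which is how the authors sidestep the class-imbalance problem.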

Haptic Perception presented in Picturesque Gardens - With a Focus on Picturesque Garden in Eighteenth-Century England - (픽처레스크 정원에 나타난 촉지적 지각 - 18세기 영국 픽처레스크 정원을 중심으로 -)

  • Kim, Jin-Seob;Kim, Jin-Seon
    • Journal of the Korean Institute of Landscape Architecture / v.44 no.2 / pp.37-51 / 2016
  • Modern optical mechanisms, slanted toward ocular-centrism, have neglected the diverse functions of vision, judged objects from abstract and binary perspectives, and organized spaces accordingly, thereby neglecting the eye's function of groping objects. Recently, various experiences have been induced through communication with the other senses by complex perception that goes beyond the binary perception system of vision. Haptic perception is a dynamic mode of vision that induces accompanying bodily experiences through interaction among the various senses; it recognizes the characteristics of material properties and the various sensory stimulations of human beings. This study elaborates the major features of haptic perception by examining the theoretical background of this concept, which stimulates the active experience of the subject, and determines how the characteristics of haptic perception are displayed in picturesque gardens. To identify the major features of haptic perception, this study examines how Adolf Hildebrand's theory of vision was developed, expanded, and reinterpreted by Alois Riegl, Wilhelm Worringer, Walter Benjamin, Maurice Merleau-Ponty, and Gilles Deleuze in the history of philosophy and aesthetics. On that basis, the core differences between haptic and visual perception models are analyzed, and the features of haptic perception are identified. Classical gardens are then taken to represent visual perception and picturesque gardens haptic perception, and the previously identified features of haptic perception are projected onto the picturesque gardens. The research results regarding the features of haptic perception presented in picturesque gardens are as follows.
The core differences of haptic perception in contrast to visual perception can be summarized as the ambiguity and obscurity of boundaries, the generation of dynamic perspectives, the induction of motility by indefinite circulation, and strangeness and sublime beauty arising from the impossibility of perception. In picturesque gardens, the ambiguity and obscurity of boundaries appear in the irregularity and asymmetry of the plan and in the rejection of a single view; the generation of dynamic perspectives results from the adoption of narrative structure and the overlapping of spaces through the creation of complete, medium-range, and distant views, which earlier gardens lacked, thereby reproducing the scene-composition technique. The induction of motility by indefinite circulation is created by branching circulation, and strangeness and sublime beauty are presented through the use of various elements and the adoption of 'roughness', 'irregularity', and 'ruins' in the gardens.

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, the Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of a deep learning framework is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on the lengths of the codes; the learning curve and ease of coding were not the main concern.
According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. As for execution speed, our assessment is that there is no meaningful difference between the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times more slowly than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. The important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important, and for someone learning deep learning, the availability of examples and references matters as well.
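The computational-graph view of automatic differentiation described above can be illustrated with a toy reverse-mode example in plain Python; the `Var` class and its operations are our own minimal sketch, not the API of any of the three frameworks being compared.

```python
class Var:
    """A node in a computational graph supporting reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Chain rule: push upstream gradient times each local gradient.

        This recursive form sums gradients over every path in the graph;
        real frameworks do the same in topological order for efficiency.
        """
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x      # z = x*y + x, so dz/dx = y + 1 = 5 and dz/dy = x = 3
z.backward()
```

After `z.backward()`, `x.grad` is 5.0 and `y.grad` is 3.0, exactly the partial derivatives the chain rule predicts.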

Survey on pesticide use by chinese cabbage growers in gangwon alpine farmland (강원도 고냉지대 배추 경작자들의 농약 사용 실태)

  • Kim, Song-Mun;Choi, Hae-Jin;Kim, Hee-Yeon;Lee, Dong-Kyung;Kim, Tae-Han;Ahn, Mun-Sub;Hur, Jang-Hyun
    • The Korean Journal of Pesticide Science / v.6 no.4 / pp.250-256 / 2002
  • The objective was to determine whether Chinese cabbage growers in the Gangwon alpine farmland control agricultural pests, including weeds, effectively and use pesticides properly. Examiners visited 185 farmers in Taebaek, Pyongchang, and Jeongseon and surveyed them with 33 questions on pest control methods and pesticide use. Chinese cabbage farmers contend with plant diseases such as clubroot, bacterial soft rot, downy mildew, anthracnose, and mosaic disease; with insect pests such as diamondback moth, aphid, beet armyworm, common cabbage worm, and the Japanese native slug; and with weeds such as common chickweed, marsh pepper, hairy crabgrass, common purslane, and horseweed. To control diseases and insects, 51.3% of the farmers used multiple chemical agents, while 20.7% of the farmers controlled weeds with chemical agents, relying heavily on paraquat and glyphosate: 87.2% of the respondents preferred these two non-selective herbicides. Farmers in the survey area selected pesticides on the basis of their own experience and sales managers' recommendations (84.2%), which results in the use of inappropriate pesticides such as diniconazole. Many farmers have experienced phytotoxicity (46.7%) and pesticide poisoning (51.2%). We conclude that a systematic educational program on the proper selection and use of pesticides should be conducted for Chinese cabbage growers in the Gangwon alpine farmland.

Experiment of Flexural Behavior of Prestressed Concrete Beams with External Tendons according to Tendon Area and Tendon Force (강선량 및 긴장력에 따른 외부 강선을 가진 PSC 보의 휨거동 실험)

  • Yoo, Sung-Won;Yang, In-Hwan;Suh, Jeong-In
    • Journal of the Korea Concrete Institute / v.21 no.4 / pp.513-521 / 2009
  • Recently, externally prestressed unbonded concrete structures are increasingly being built. The mechanical behavior of prestressed concrete beams with external unbonded tendons differs from that of ordinary bonded PSC beams in that slip of the tendons at deviators and changes in tendon eccentricity occur as external loads are applied. The purpose of the present paper is therefore to evaluate the flexural behavior by performing static flexural tests for varying tendon area and tendon force. The experimental results show that, before flexural cracking, there was no difference between external members and bonded members. After cracking, however, the yielding load of the reinforcement, the ultimate load, and the tendon stress of the external members were lower than those of the bonded members. In the load-tendon stress relationship, the increase in tendon strain was inversely proportional to the initial tendon force. However, even when the initial tendon force was large, the tendon strain with a small effective stress was smaller than that with a large effective stress. The concrete compressive strain was proportional to the effective stress of the external tendon. Comparison between the test results and design codes showed that ACI 318 cannot account for the effect of tendon force or effective stress, and its predictions were very small and hence very conservative. AASHTO 1994 can reflect the tendon area, initial force, and effective stress, but since it was derived for internal unbonded tendons, its results were much larger than the test results. For this reason, a new, more accurate prediction equation for external tendon stress is needed.
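For context, the ACI 318 provision for unbonded tendons referred to above is, as we recall it for span-to-depth ratios of 35 or less (SI units; readers should confirm against the code text):

```latex
f_{ps} = f_{se} + 70 + \frac{f'_c}{100\,\rho_p} \ \text{MPa},
\qquad f_{ps} \le f_{py},
\qquad f_{ps} \le f_{se} + 420 \ \text{MPa}
```

Here $f_{se}$ is the effective prestress, $f'_c$ the concrete strength, and $\rho_p$ the prestressing steel ratio. Since $f_{ps}$ depends only on these fixed quantities, the formula cannot respond to the tendon area or force variations tested in the paper, which is consistent with the authors' observation that its predictions are uniformly conservative.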

Analysis of Research Trends in the Successful Establishment of Venture Companies: with Priority Given to Domestic Articles Between 1998 and 2014 (국내 벤처기업의 창업 성공에 관한 연구동향 분석: 메타분석을 활용하여)

  • Lee, Yong-hee;Hong, Kwang-pyo;Park, Su-hong
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.10 no.6 / pp.15-26 / 2015
  • The purpose of this study was to examine the area, theme, and method of domestic articles on the successful establishment of venture companies published between 1998 and 2014, according to related basic classification criteria, in an effort to shed light on the characteristics of these studies. It is meant to contribute to the development of research on the success of venture businesses and to be of practical use for the success of venture companies in our country. After related earlier studies were analyzed, 164 articles were collected and 64 were selected from among them; duplicated, unpublished, and unregistered articles were excluded. The related studies had been conducted by different learned societies interested in the same research themes, and as of 2014 the studies were led primarily by the Korean Society of Business Venturing, especially through its journal, the Asia-Pacific Journal of Business Venturing and Entrepreneurship; this society seems to have taken the initiative in the field. The number of related studies has tended to increase, rising steadily since 2012, owing to the shifts of modern industrial society and the changing policies of the government. In terms of method, quantitative research methods, especially descriptive studies, were prevailing; to advance research in this field, a wider variety of research methods should be utilized in the future. As for the sphere of research, the characteristics of founders, strategic characteristics, and environmental characteristics were mainly covered as the success factors of venture businesses. More diverse variables should also be explored to produce more advanced and extended research in the years to come.
The findings of the study are expected to provide both theoretical and practical information on the establishment and success of venture business to make a contribution to the development of research.


Real-Time 3D Ultrasound Imaging Method Using a Cross Array Based on Synthetic Aperture Focusing: II. Linear Wave Front Transmission Approach (합성구경 기반의 교차어레이를 이용한 실시간 3차원 초음파 영상화 기법 : II. 선형파면 송신 방법)

  • 김강식;송태경
    • Journal of Biomedical Engineering Research / v.25 no.5 / pp.403-414 / 2004
  • In the accompanying paper, we proposed a real-time volumetric imaging method using a cross array based on receive dynamic focusing and synthetic aperture focusing along the lateral and elevational directions, respectively. Synthetic aperture methods using spherical waves, however, are subject to beam spreading with increasing depth due to wave diffraction. Moreover, since the proposed method uses only one element for each transmission, its transmit power is limited. To overcome these limitations, we propose a new real-time volumetric imaging method using cross arrays based on a synthetic aperture technique with linear wave fronts. In the proposed method, linear wave fronts having different angles on the horizontal plane are transmitted successively from all transmit array elements. On receive, by employing conventional dynamic focusing and synthetic aperture methods along the lateral and elevational directions, respectively, the ultrasound waves can be focused effectively at all imaging points. Mathematical analysis and computer simulation results show that the proposed method provides uniform elevational resolution over a large depth of field. In particular, since the new method can construct a volume image with a limited number of transmit-receive events using the full transmit aperture, it is suitable for real-time 3D imaging with high transmit power and volume rate.
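The focusing idea underlying both papers, delaying and summing echoes according to the acoustic path length to each imaging point, can be sketched as below. This is a simplified monostatic delay-and-sum illustration with hypothetical array geometry and sampling parameters, not the cross-array or linear-wave-front method of the paper itself.

```python
import numpy as np

def delay_and_sum(rf, elem_x, points, c=1540.0, fs=40e6):
    """Synthetic-aperture delay-and-sum focusing (monostatic sketch).

    rf      : (n_elements, n_samples) echo data, one row per firing element
    elem_x  : (n_elements,) lateral element positions in meters
    points  : iterable of (x, z) image points in meters
    c, fs   : assumed speed of sound [m/s] and sampling rate [Hz]
    Returns the focused amplitude at each image point.
    """
    n_elem, n_samp = rf.shape
    out = np.zeros(len(points))
    for k, (x, z) in enumerate(points):
        # Two-way path: element -> point -> same element.
        dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
        idx = np.round(2.0 * dist / c * fs).astype(int)
        valid = idx < n_samp
        out[k] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return out
```

Echoes from a true scatterer line up coherently at its own location and add up across all elements, while at other points the delays do not match and the sum stays small; the paper's contribution is arranging the transmit events so this coherence holds along both array axes in real time.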

A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation / v.23 no.2 / pp.117-130 / 2005
  • In both the deterministic User Optimal Traffic Assignment Model (UOTAM) and the stochastic UOTAM, travel time, which is the major criterion for loading traffic over a transportation network, is defined as the sum of link travel time and turn delay at intersections. In this assignment method, drivers' actual route perception processes and choice behaviors, which can be main explanatory factors, are not sufficiently considered, and the resulting traffic loading may therefore be biased. Although there have been some efforts in stochastic UOTAM to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, these have not been fundamental solutions, but rather trials resting on unreasonable assumptions: the truncated travel time distribution function of the Probit model and the inter-link congestion independence of the Logit model. The critical reason deterministic UOTAM has not been able to reflect route perception cost is that the perception cost takes a different value for each origin, destination, and path connecting that origin and destination. Finding the optimum route between an OD pair therefore runs into the route enumeration problem, in which all routes connecting the OD pair must be compared; this is the critical cause of computational failure, because the number of paths to enumerate grows without bound as the transportation network becomes larger. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without route enumeration between O-D pairs. For this purpose, this study defines a link as the minimal unit of a path. Since each link can then be treated as a path, in the two-link searching process of the link-label-based optimum path algorithm, route enumeration between an OD pair is reduced to finding optimum paths over all links. The computational burden of this method is no more than that of the link-label-based optimum path algorithm.
Each link's perception cost is embedded as a quantitative value generated by comparing the sub-path from the origin to the link being searched with the searched link.
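The link-label idea above can be sketched as a Dijkstra search whose labels live on links instead of nodes, so the cost of entering a link can include a turn delay and a perception cost for that specific link pair without enumerating whole routes. This is our own minimal illustration with hypothetical link IDs and costs; the paper's perception cost, generated by comparing sub-paths, is reduced here to a fixed per-pair value.

```python
import heapq

def link_label_shortest_path(links, turns, origin_links, dest_links):
    """Link-label Dijkstra over a transportation network.

    links        : {link_id: link_travel_time}
    turns        : {(from_link, to_link): turn_delay + perception_cost}
    origin_links : links leaving the origin node
    dest_links   : links entering the destination node
    Returns the minimum total cost from origin to destination.
    """
    # Adjacency: which links can follow which, and at what extra cost.
    adj = {}
    for (a, b), extra in turns.items():
        adj.setdefault(a, []).append((b, extra))

    dist = {l: float("inf") for l in links}
    heap = []
    for l in origin_links:
        dist[l] = links[l]
        heapq.heappush(heap, (dist[l], l))

    while heap:
        d, l = heapq.heappop(heap)
        if d > dist[l]:
            continue  # stale heap entry
        for nxt, extra in adj.get(l, []):
            nd = d + links[nxt] + extra
            if nd < dist[nxt]:
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))

    return min(dist[l] for l in dest_links)

# Hypothetical three-link network: a -> b and a -> c, plus b -> c.
links = {"a": 1.0, "b": 2.0, "c": 5.0}
turns = {("a", "b"): 0.5, ("a", "c"): 0.0, ("b", "c"): 0.1}
best = link_label_shortest_path(links, turns, ["a"], ["c"])  # 1 + 5 + 0 = 6.0
```

Because every label is attached to a link, the search cost stays at the level of an ordinary label-setting shortest-path run even though each (from-link, to-link) pair can carry its own perception cost.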