

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung; Kim, Kitae; Kim, Jongwoo; Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest in analyzing and using transaction data from different perspectives is increasing. Such efforts are not limited to customer management or marketing; they are also used to monitor and detect fraudulent transactions. Fraudulent transactions evolve into various patterns by exploiting information technology, and to keep pace with this evolution there have been many efforts to develop fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case study, this paper proposes effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception policy exists to complement auction-based trades in the wholesale market: most agricultural products are traded by auction, but specific products are designated auction-exception products when their total volumes are relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing them. However, the policy raises problems of fairness and transparency of transactions, which calls for fraud detection. To generate fraud detection rules, this study analyzes the market's actual trade transaction data from 2008 to 2010, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data has unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent price changes. Since this was the first attempt to identify fraudulent transactions in this domain, no training data set was available for supervised learning, so fraud detection rules are generated using an outlier detection approach under the assumption that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; quarterly average unit prices of product items for specific wholesalers are also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and normalized Z-value concepts are applied: the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability under a normal approximation of the unit-price distribution. A modified Z-value is used rather than the original one because, for auction-exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves owing to the small number of wholesalers. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked. To demonstrate the usefulness of the proposed approach, a prototype fraud transaction detection system was developed in Delphi. The system consists of five main menus and related submenus. Its first function is to import transaction databases; the next is to set fraud detection parameters. By changing these parameters, users can control the number of potential fraud transactions flagged. Execution functions produce detection results based on the parameters, and the potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; more data on transactions, wholesalers, and producers is needed to detect fraud more accurately. Next, the scope of detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques; a time series approach, for example, is a potential fit for this problem. Finally, although outlier transactions were detected based on unit prices, fraud detection rules could also be derived from transaction volumes.
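The Self-Eliminated Z-score described above lends itself to a compact leave-one-out computation. The sketch below is a minimal illustration, not the authors' implementation (their prototype was written in Delphi): the function name and the use of NumPy are assumptions, and a transaction would be flagged when its score falls in the far tail of the normal approximation.

```python
import numpy as np

def self_eliminated_z(unit_prices):
    """Leave-one-out Z-scores: each unit price is scored against the mean and
    standard deviation of the *other* prices, so a fraudulent outlier cannot
    mask itself by inflating the statistics it is judged against."""
    prices = np.asarray(unit_prices, dtype=float)
    z = np.empty(len(prices))
    for i in range(len(prices)):
        rest = np.delete(prices, i)
        mu, sigma = rest.mean(), rest.std(ddof=1)
        z[i] = (prices[i] - mu) / sigma if sigma > 0 else 0.0
    return z

# e.g., flag transactions beyond |z| = 3 as potential fraud candidates:
# candidates = np.abs(self_eliminated_z(daily_unit_prices)) > 3.0
```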

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae; Kim, Seonghyeon; Tak, Onsik; Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.95-118 / 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing is a blind spot for real estate information, and this creates a social problem owing to changes in market size and the information asymmetry that follows changes in demand. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were drawn along administrative boundaries and have been used in existing real estate studies, but they are urban-planning zones rather than districts designed for real estate research. Building on existing studies, this study found that Seoul's spatial structure needs to be redefined for estimating future housing prices. Accordingly, this study attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing; in other words, simple division by existing administrative districts has proven inefficient, so this study clusters Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to actual transaction price data of row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprised actual transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Preprocessing consisted of removing underground transactions, standardizing price per area, and removing transaction cases with standardized values above 5 or below -5, reducing the data from 132,707 to 126,759 cases. The R program was used as the analysis tool. After preprocessing, the data model was constructed: K-means clustering was performed first, and then a regression analysis using the hedonic model and a cosine similarity analysis were conducted. Based on the constructed data model, Seoul was clustered on longitude and latitude and compared with the existing areas; a sketch of these two steps follows this abstract. The results indicated that the goodness of fit of the model was above 75% and that the variables used in the hedonic model were significant. The existing administrative areas of 5 or 25 districts were divided into 16 districts. This study thus derived a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model reflecting price characteristics, and it presents academic and practical implications as well as limitations and directions for future research. The academic implications are, first, that areas were clustered by reflecting price characteristics, improving on the areas used by the Seoul Metropolitan Government, KAB, and existing real estate research; and second, that whereas apartments have been the main subject of existing research, this study proposes a method of classifying areas in Seoul using public information (i.e., actual transaction data from MOLIT) under Government 3.0. The practical implications are that the results can serve as basic data for real estate research on row and multiplex housing, that they are expected to stimulate such research, and that they should increase the accuracy of models built on actual transactions. Future research should conduct various analyses to overcome the limitations of this study.
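The two modeling steps named above, K-means on coordinates followed by a hedonic price regression, can be sketched as follows. The abstract reports using R, so this Python version with hypothetical column names and synthetic stand-in data is purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans

# Synthetic stand-in for the MOLIT transaction records (columns hypothetical)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "longitude": rng.uniform(126.8, 127.2, 500),
    "latitude": rng.uniform(37.4, 37.7, 500),
    "price_per_area": rng.lognormal(8.0, 0.3, 500),
    "building_age": rng.integers(0, 30, 500),
    "floor_area": rng.uniform(20, 85, 500),
})

# Step 1: cluster Seoul into 16 districts on longitude/latitude
df["district"] = KMeans(n_clusters=16, n_init=10, random_state=0) \
    .fit_predict(df[["longitude", "latitude"]])

# Step 2: hedonic regression of log price on property characteristics
X = sm.add_constant(df[["building_age", "floor_area"]])
fit = sm.OLS(np.log(df["price_per_area"]), X).fit()
print(fit.rsquared)  # goodness of fit, reported in the study as above 75%
```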

A Study on the Growth Diagnosis and Management Prescription for Population of Retusa Fringe Trees in Pyeongji-ri, Jinan(Natural Monument No. 214) (진안 평지리 이팝나무군(천연기념물 제214호)의 생육진단 및 관리방안)

  • Rho, Jae-Hyun; Oh, Hyun-Kyung; Han, Sang-Yub; Choi, Yung-Hyun; Son, Hee-Kyung
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.36 no.3 / pp.115-127 / 2018
  • This study was undertaken to clarify the value of the cultural asset through diagnosis of, and prescription for, the death and weakening factors of the Population of Retusa Fringe Trees in Pyeongji-ri, Jinan (Natural Monument No. 214). The results are as follows. First, many years have passed since the 13 trees were designated as a natural monument in 1968 and since 1973, and although some of the buried soil was removed during maintenance work after 2010, such as setting back the fence of the elementary school, the covering soil problem persists. Second, trees No. 1 and No. 3 among the surviving designated trees have many dead branches, dull foliage, and little leaf mass; their vitality is 'extremely bad'. Tree No. 1 has already lost a large number of branches, and its foliage was so sparse this year that only two flowers bloomed. Tree No. 2 is also in a 'bad' state, with small leaves, low leaf density, and deformed growth. For tree No. 1, the largest of the group, there is the added concern that the covering soil is presumed to be paddy soil. Third, the soil was found to be silty loam (SiL), indicating a high silt composition ratio. In addition, the pH of the soil north of tree No. 1 was 6.6, significantly different from the other soils, and the organic matter content was above the appropriate range, which is considered to reflect continuous fertilization for protection management. Fourth, the root cause of the death and poor growth of the Jinan Pyeongji-ri population is judged to be chronic decline from the serious deterioration of growing conditions caused by the covering soil; this may also account for the death of some of the newly planted successor trees. Fifth, it is urgent to gradually remove the covering soil, which is presumed to be the initial cause of damage. Above all, the covering soil should be removed only after the condition of the soil buried around the root collar, such as clayey soil, has been investigated in detail. After removal, an aeration facility should be installed and the ground improved with masato (decomposed granite soil) to aid root respiration. It is best to remove the dead wood of trees No. 4, 5, and 6, and the understory vegetation should be mown. Dead tissue should be removed from the upper trunk surface, and bark defects should be treated surgically to induce the development of dormant buds below the growth point. Sixth, the underground roots should be surveyed to prepare measures for relieving root compaction and improving soil respiration; tracing the main roots from the root collar and shortening rotten roots will induce the generation of new roots. Seventh, mulching should be applied to suppress weed growth, relieve trampling pressure, and retain soil moisture. Consideration should also be given to foliar fertilization, nutrient injection, and inorganic fertilizer management of the soil for a continuous nutrient supply. A monitoring and forecasting plan should be developed to check continuously for changes.

Growth Efficiency, Carcass Quality Characteristics and Profitability of 'High'-Market Weight Pigs ('고체중' 출하돈의 성장효율, 도체 품질 특성 및 수익성)

  • Park, M.J.; Ha, D.M.; Shin, H.W.; Lee, S.H.; Kim, W.K.; Ha, S.H.; Yang, H.S.; Jeong, J.Y.; Joo, S.T.; Lee, C.Y.
    • Journal of Animal Science and Technology / v.49 no.4 / pp.459-470 / 2007
  • Domestically, finishing pigs are marketed at 110 kg on average. However, it is thought feasible to increase the market weight to 120 kg or greater without decreasing carcass quality, because most domestic pigs for pork production descend from lean-type lineages. The present study investigated the growth efficiency and profitability of 'high'-market-weight pigs and the physicochemical characteristics and consumer acceptability of the high-weight carcass. A total of 96 (Yorkshire × Landrace) × Duroc-crossbred gilts and barrows were fed a finisher diet ad libitum in 16 pens beginning at 90 kg BW, after which the animals were slaughtered at 110 kg (control) or 'high' market weight (135 and 125 kg in gilts and barrows, respectively) and their carcasses were analyzed. Average daily gain and gain:feed did not differ between the two sex or market weight groups, whereas average daily feed intake was greater in the barrow and high-market-weight groups than in the gilt and 110-kg groups, respectively (P<0.01). Backfat thicknesses of the high-market-weight gilts and barrows corrected for 135- and 125-kg live weight, 23.7 and 22.5 mm respectively, were greater (P<0.01) than those of their 110-kg counterparts (19.7 and 21.1 mm). Percentages of the trimmed primal cuts per total trimmed lean (w/w), except for loin, differed statistically (P<0.05) between the sex or market weight groups, but the numerical differences were rather small. Crude protein content of the loin was greater in the high versus 110-kg market group (P<0.01), but crude fat and moisture contents and other physicochemical characteristics, including the color of this primal cut, did not differ between the two sexes or market weights. Aroma, marbling, and overall acceptability scores were greater in the high versus 110-kg market weight group in sensory evaluation of fresh loin (P<0.01); however, overall acceptability of cooked loin, belly, and ham did not differ between the two market weight groups. Marginal profits of the 135- and 125-kg high-market-weight gilt and barrow relative to their 110-kg counterparts were approximately -35,000 and 3,500 won per head under the current carcass grading standard and price. However, had there been no upper weight limits for the A- and B-grade carcasses, the marginal profits of the high-market-weight gilt and barrow would have amounted to 22,000 and 11,000 won per head, respectively. In summary, 120-125 kg market pigs are likely to meet consumer preference better than 110-kg ones and to bring an equal or slightly greater profit even under the current carcass grading standard. Moreover, if the upper weight limits of the A- and B-grade carcasses were removed or increased to accommodate the high-weight carcass, the optimum market weights for the gilt and barrow would fall at the target weights of the present study, i.e., 135 and 125 kg, respectively.

Comparison of CT-based CTV plan and CT-based ICRU 38 plan in brachytherapy planning of uterine cervix cancer (자궁경부암 강내조사 시 CT를 이용한 CTV에 근거한 치료계획과 ICRU 38에 근거한 치료계획의 비교)

  • Shim JinSup; Jo JungKun; Si ChangKeun; Lee KiHo; Lee DuHyun; Choi KyeSuk
    • The Journal of Korean Society for Radiation Therapy / v.16 no.2 / pp.9-17 / 2004
  • Purpose: Despite improvements in CT and MRI diagnostics and radiation therapy planning, the ICRU 38 planning system (2D, film-based) is still in broad use. A 3-dimensional ICR plan (CT image-based) not only provides tumor and normal tissue doses but also supports DVH information. In this study, we planned the prescribed dose to the CTV (CTV plan) and to the ICRU 38 point (ICRU 38 plan) using CT images, compared tumor, rectal, and bladder doses between the two plans, and analyzed the DVHs. Methods and Materials: Eleven patients treated with Ir-192 HDR were sampled. After 40 Gy of external radiation therapy, the ICR plan was established. All patients underwent CT imaging with a CT simulator, and the PLATO (Nucletron) v.14.2 planning system was used. The CTV, rectum, and bladder were contoured on the CT images, and plans were established delivering 100% of the dose to the CTV (CTV plan) or to point A (ICRU 38 plan). Results: For the 11 patients, the CTV volume (average ± SD) was 21.8 ± 26.6 cm³, the rectal volume 60.9 ± 25.0 cm³, and the bladder volume 116.1 ± 40.1 cm³. The volume covered by the 100% isodose was 126.7 ± 18.9 cm³ in the ICRU plan and 98.2 ± 74.5 cm³ in the CTV plan. In the ICRU plan, 22.0 cm³ of CTV volume was not covered by the 100% isodose in the patient whose residual tumor exceeded 4 cm, whereas in the 8 patients with residual tumors below 4 cm, tumor volumes of 12.9 ± 5.9 cm³ received the full 100% dose. The bladder dose (as recommended by ICRU 38) was 90.1 ± 21.3% in the ICRU plan and 68.7 ± 26.6% in the CTV plan; the rectal dose was 86.4 ± 18.3% and 76.9 ± 15.6%, respectively. The maximum bladder and rectum doses were 137.2 ± 50.1% and 101.1 ± 41.8% in the ICRU plan versus 107.6 ± 47.9% and 86.9 ± 30.8% in the CTV plan. The CTV plan therefore delivered less dose to normal tissue than the ICRU plan; however, in the one patient whose residual tumor exceeded 4 cm, the normal tissue dose in the CTV plan was remarkably higher than the critical dose. The rectal volume irradiated above 80% of the dose (V80rec) was 1.8 ± 2.4 cm³ in the ICRU plan and 0.7 ± 1.0 cm³ in the CTV plan; the corresponding bladder volume (V80bla) was 12.2 ± 8.9 cm³ and 3.5 ± 4.1 cm³. Again, the CTV plan irradiated less normal tissue than the ICRU 38 plan. Conclusion: Although the effectiveness and stability of the conventional ICRU plan are proven, a CT image-based CTV plan can reduce the normal tissue dose while delivering the prescribed dose to the residual tumor in cases with small residual tumors. For larger residual tumors, more research on effective 3D planning is needed.
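The V80 figures above are dose-volume-histogram quantities: the organ volume receiving at least 80% of the prescribed dose. A minimal sketch of how such a metric can be derived from a per-voxel dose grid follows; the function name and array layout are illustrative assumptions, not part of the PLATO planning system used in the study.

```python
import numpy as np

def volume_at_least(dose_pct, threshold_pct, voxel_cm3):
    """Organ volume (cm^3) receiving at least `threshold_pct` of the
    prescribed dose; `dose_pct` holds per-voxel dose as % of prescription."""
    dose = np.asarray(dose_pct, dtype=float)
    return float(np.count_nonzero(dose >= threshold_pct) * voxel_cm3)

# e.g., V80 of the rectum for 1 mm^3 voxels (hypothetical input array):
# v80_rec = volume_at_least(rectum_dose_pct, 80.0, voxel_cm3=0.001)
```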


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung; Kim, Kyoung-Jae; Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules are used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, by using its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric values must be discretized for rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, there are four discretization methods in rough set analysis: equal frequency scaling, expert knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert knowledge-based discretization determines cuts according to domain expert knowledge obtained through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data, then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance. In this study, we compare stock market timing models using rough set analysis with various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index of 200 stocks selected on criteria of liquidity and status in their industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total sample comprises 660 trading days, and popular technical indicators are used as independent variables. The experimental results show that the naïve and Boolean reasoning method is the most profitable on the training sample, but expert knowledge-based discretization is the most profitable on the validation sample; moreover, expert knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert knowledge-based discretization produced more profitable rules than C4.5.
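As a rough illustration of equal frequency scaling, the first of the four discretization methods described above, the sketch below bins a numeric indicator so that each interval holds roughly the same number of samples. The variable name and the use of pandas are assumptions for illustration; the returned bin edges correspond to the "cuts" the method searches for.

```python
import numpy as np
import pandas as pd

# Hypothetical technical indicator over the study's 660 trading days
indicator = pd.Series(np.random.randn(660), name="momentum")

# Equal frequency scaling: 4 intervals with ~165 samples each
codes, cuts = pd.qcut(indicator, q=4, labels=False, retbins=True)
print(cuts)                  # the discretization thresholds ("cuts")
print(codes.value_counts())  # roughly equal counts per interval
```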

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The analyzed data total 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning used only corporate information that is also available for unlisted companies, default risks of unlisted companies without stock price information can be derived appropriately. The approach can thus provide stable default risk assessment for companies that are difficult to rate with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions from a single model. A stable and reliable valuation methodology is required, given that default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data, experience with credit ratings, and changes in future market conditions. This study reduces individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This captures complex nonlinear relationships between default risk and corporate information and maximizes the advantages of machine learning-based default risk prediction, which takes little time to compute. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce their forecasts. To compare predictive power, Random Forest, MLP, and CNN models were also trained on the full training data, and each model's predictive power was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs of the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can apply machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to overcome and improve the limitations of existing machine learning-based models and to increase their practical use.
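A minimal sketch of the setup described above: sub-models whose seven-fold out-of-fold forecasts feed a meta-model, followed by the normality check and nonparametric comparison of two models' forecasts. The scikit-learn stack, the logistic regression meta-learner, and the synthetic data are illustrative assumptions, and the paper's CNN sub-model is omitted for brevity.

```python
import numpy as np
from scipy.stats import ranksums, shapiro
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: financial-statement features, default labels
X = np.random.rand(1000, 20)
y = (np.random.rand(1000) > 0.9).astype(int)

# cv=7 trains each sub-model on 6/7 of the data and predicts the held-out
# seventh, so the meta-learner sees only out-of-fold forecasts
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,
).fit(X, y)
p_stack = stack.predict_proba(X)[:, 1]

# Pairwise comparison as in the abstract: Shapiro-Wilk on the differences,
# then a nonparametric test when normality is rejected
p_rf = stack.named_estimators_["rf"].predict_proba(X)[:, 1]
if shapiro(p_stack - p_rf).pvalue < 0.05:
    print(ranksums(p_stack, p_rf))
```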

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning, interest in image captioning technology and its applications is rapidly increasing. Image captioning automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, which requires analysts to process both image and text data, image captioning has established itself as one of the key fields of AI research owing to its wide applicability, and much research has been conducted to improve its performance in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to perceive an image holistically and generally, identifying its constituent objects and their relationships; domain experts, on the contrary, focus on the specific elements needed to interpret the image based on their expertise. The meaningful parts of an image thus differ with the viewer's perspective, and image captioning needs to reflect this phenomenon. Therefore, this study proposes a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, expertise is transplanted through transfer learning with a small amount of expert data. However, simple application of transfer learning with expert data may invoke another problem: learning simultaneously from captions of various characteristics can cause so-called 'inter-observation interference', which makes pure learning of each characteristic viewpoint difficult. When learning from vast data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expert captions were created and used for the expertise transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on solving the shortage of expert data and improving image captioning performance.
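A minimal sketch of the expertise-transplant step described above, assuming an encoder/decoder captioner in PyTorch: the generally pre-trained encoder is frozen and only the decoder is adapted on the small expert caption set. The class and function names are hypothetical, and the per-characteristic independence of 'Character-Independent Transfer-learning' would be obtained by running this adaptation separately on each characteristic's captions.

```python
import torch
import torch.nn as nn

class Captioner(nn.Module):
    """Hypothetical captioner: image encoder feeding a caption decoder."""
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder

    def forward(self, images, captions_in):
        return self.decoder(self.encoder(images), captions_in)

def transplant_expertise(model: Captioner, expert_loader, epochs=5, lr=1e-4):
    """Transfer-learning step: freeze the pre-trained encoder and fine-tune
    only the decoder on a small set of expert image/caption pairs."""
    for p in model.encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, captions_in, targets in expert_loader:
            opt.zero_grad()
            logits = model(images, captions_in)   # (batch, seq, vocab)
            loss = loss_fn(logits.flatten(0, 1), targets.flatten())
            loss.backward()
            opt.step()
```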

Mid-term results of Intracardiac Lateral Tunnel Fontan Procedure in the Treatment of Patients with a Functional Single Ventricle (기능적 단심실 환자에 대한 심장내 외측통로 폰탄술식의 중기 수술성적)

  • 이정렬; 김용진; 노준량
    • Journal of Chest Surgery / v.31 no.5 / pp.472-480 / 1998
  • We reviewed the surgical results of the intracardiac lateral tunnel Fontan procedure for repair of functional single ventricles. Between 1990 and 1996, 104 patients underwent total cavopulmonary anastomosis. Patient age and body weight averaged 35.9 (range 10 to 173) months and 12.8 (range 6.5 to 37.8) kg. Preoperative diagnoses included 18 tricuspid atresias, 53 double-inlet ventricles with univentricular atrioventricular connection, and 33 other complex lesions. Previous palliative operations had been performed in 50 of these patients, including 37 systemic-to-pulmonary artery shunts, 13 pulmonary artery bandings, 15 surgical atrial septectomies, 2 arterial switch procedures, 2 resections of subaortic conus, 2 repairs of total anomalous pulmonary venous connection, and 1 Damus-Kaye-Stansel procedure. In 19 patients a bidirectional cavopulmonary shunt was performed before the Fontan procedure, and in 1 patient a Kawashima procedure was required. Preoperative hemodynamics revealed a mean pulmonary artery pressure of 14.6 (range 5 to 28) mmHg, a mean pulmonary vascular resistance of 2.2 (range 0.4 to 6.9) Wood units, a mean pulmonary-to-systemic flow ratio of 0.9 (range 0.3 to 3.0), a mean ventricular end-diastolic pressure of 9.0 (range 3.0 to 21.0) mmHg, and a mean arterial oxygen saturation of 76.0 (range 45.6 to 88.0)%. The operative procedure consisted of a longitudinal right atriotomy 2 cm lateral to the terminal crest up to the right atrial auricle, followed by creation of a lateral tunnel connecting the orifice of either the superior caval vein or the right atrial auricle to the inferior caval vein, using a Gore-Tex vascular graft with or without a fenestration. Concomitant procedures at the time of the Fontan procedure included 22 pulmonary artery angioplasties, 21 atrial septectomies, 4 atrioventricular valve replacements or repairs, 4 corrections of anomalous pulmonary venous connection, and 3 permanent pacemaker implantations. A fenestration was created in 31 patients, and an adjustable communication was made in the lateral tunnel pathway in 1. One conversion to a lateral tunnel was performed in a patient with recurrent intractable tachyarrhythmia 4 years after an initial atriopulmonary connection. Post-extubation hemodynamic data revealed a mean pulmonary artery pressure of 12.7 (range 8 to 21) mmHg, a mean ventricular end-diastolic pressure of 7.6 (range 4 to 12) mmHg, and a mean room-air arterial oxygen saturation of 89.9 (range 68 to 100)%. Follow-up averaged 27 (range 1 to 85) months. Post-Fontan complications included 11 prolonged pleural effusions, 8 arrhythmias, 9 chylothoraces, 5 cases of central nervous system damage, 5 infectious complications, and 4 cases of acute renal failure. There were 7 early (6.7%) and 5 late (4.8%) deaths. These results show that the lateral tunnel Fontan procedure provided excellent hemodynamic improvement with acceptable mortality and morbidity for hearts with various types of functional single ventricle.


Literature Analysis of Radiotherapy in Uterine Cervix Cancer for the Processing of the Patterns of Care Study in Korea (한국에서 자궁경부암 방사선치료의 Patterns of Care Study 진행을 위한 문헌 비교 연구)

  • Choi Doo Ho; Kim Eun Seog; Kim Yong Ho; Kim Jin Hee; Yang Dae Sik; Kang Seung Hee; Wu Hong Gyun; Kim Il Han
    • Radiation Oncology Journal / v.23 no.2 / pp.61-70 / 2005
  • Purpose: Uterine cervix cancer is one of the most prevalent cancers among women in Korea. We analyzed papers published in Korea, comparing them with Patterns of Care Study (PCS) articles from the United States and Japan, for the purpose of developing and conducting a Korean PCS. Materials and Methods: We searched PCS-related foreign papers on the PCS homepage (212 articles and abstracts) and in PubMed to identify the Structure and Process of the PCS. To compare those studies with Korean papers, we used the 'Korean Pub Med' internet site to find 99 articles on uterine cervix cancer and radiation therapy. We analyzed the Korean papers by comparing them with selected PCS papers regarding Structure, Process, and Outcome, and compared items between the periods before the 1980s and the 1990s. Results: Evaluable papers treating cervix PCS items numbered 28 from the United States, 10 from Japan, and 73 from Korea. PCS papers from the United States and Japan commonly stratified facilities into 3-4 categories on the basis of facility characteristics and the numbers of patients and doctors, and the researchers restricted eligible patients strictly. For the Process of the study, they analyzed factors regarding pretreatment staging in chronological order, treatment-related factors, factors in addition to FIGO staging, and treatment machines. Papers from the United States dealt with racial and socioeconomic characteristics of the patients, tumor size (6), and bilaterality of parametrial or pelvic side wall invasion (5), whereas papers from Japan treated tumor markers. A common trend in the staging work-up was decreased use of lymphangiography and barium enema and increased use of CT and MRI over time. Recent subjects in the Korean papers were concurrent chemoradiotherapy (9 papers), treatment duration (4), tumor markers (B), and unconventional fractionation. Conclusion: By comparing papers from the 3 nations, we collected items for a Korean uterine cervix cancer PCS. Through consensus meetings and close communication, survey items were developed to measure the Structure, Process, and Outcome of the radiation treatment of cervix cancer. Subsequent research will focus on the use of brachytherapy and its impact on outcome, including complications. These findings and future PCS studies will direct the development of educational programs aimed at correcting identified deficits in care.