• Title/Summary/Keyword: Complement


Usefulness Evaluation of Gated RapidArc Radiation Therapy and Patient Application Using the Amplitude Mode (호흡 동조 체적 세기조절 회전 방사선치료의 유용성 평가와 진폭모드를 이용한 환자적용)

  • Kim, Sung Ki;Lim, Hyun Sil;Kim, Wan Sun
    • The Journal of Korean Society for Radiation Therapy, v.26 no.1, pp.29-35, 2014
  • Purpose: Gated RapidArc equipment, which automates respiratory-gated volumetric modulated arc therapy (VMAT), has recently become commercially available; previously, gating and RapidArc delivery could not be performed simultaneously. This study evaluated the accuracy and usefulness of Gated RapidArc treatment and applied the Amplitude mode to patients. Materials and Methods: Dose distributions were measured with a tissue-equivalent solid water phantom and GafChromic film and analyzed with the Film QA program using the gamma criterion (3%, 3 mm). To verify the three-dimensional dose distribution, a Matrixx dosimetry device and the Compass dose analysis program were used. Periodic respiratory signals were generated with a solid phantom on a 4D motion phantom and the Varian RPM respiratory gating system, and dose distributions were analyzed under free breathing and breath-hold. From February 2013 to August 2013, four patients with liver cancer were enrolled. 4DCT images sufficient to cover the respiratory cycle were acquired while each patient, watching the respiratory pattern through video goggles, practiced following a regular breathing cycle exactly. For Gated RapidArc treatment in Amplitude mode, the patient breathed three times to create the respiratory cycle and then held the breath for 5-6 seconds in the 40%-60% phase interval; treatment was delivered semi-automatically by pressing the Beam On button during the breath-hold within that interval. Results: Absolute doses calculated by the treatment planning system for non-gated and gated VMAT differed by less than 1%, and the difference between treatment techniques was also less than 1%. Gamma analysis (3%, 3 mm) showed 99% agreement, and organ-specific dose differences generally showed greater than 95% agreement. For gated VMAT, the respiratory cycle created in Amplitude mode agreed well with each patient's actual breathing cycle. Conclusion: Non-gated and gated VMAT showed very good agreement in absolute dose and dose distribution. This gated VMAT technique can therefore be applied to the treatment of tumors that move with thoracic or abdominal respiration. Creating the Amplitude-mode respiratory cycle through patient goggles, with a 5-6 second breath-hold, was found to complement and facilitate gated VMAT on equipment that does not apply Gated RapidArc automatically.
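The gamma criterion (3%, 3 mm) used in the film analysis combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D sketch of a global gamma pass-rate calculation (our own illustration, not the Film QA program's implementation; the function name and profile shapes are hypothetical):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Global gamma analysis for 1D dose profiles.

    ref_dose, eval_dose: dose arrays sampled at `positions` (mm).
    dose_tol: dose-difference criterion as a fraction of the max reference dose.
    dist_tol_mm: distance-to-agreement criterion in mm.
    Returns the fraction of reference points with gamma <= 1.
    """
    dd = dose_tol * ref_dose.max()            # global dose criterion
    gammas = []
    for r_pos, r_dose in zip(positions, ref_dose):
        dist = (positions - r_pos) / dist_tol_mm      # normalized distances
        dose = (eval_dose - r_dose) / dd              # normalized dose diffs
        gammas.append(np.sqrt(dist**2 + dose**2).min())
    return float(np.mean(np.array(gammas) <= 1.0))
```

A profile scaled by 1% everywhere passes a 3%/3 mm test at every point, while a 50% scaling fails near the peak but can still pass in the low-dose tail.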

Study on Types and Counterplans of Medical Accident Experienced by Dentists in Seoul (2004) (서울특별시 개원 치과의사의 의료사고 및 분쟁의 유형과 대책에 관한 연구(2004년))

  • Yoon, Jeong-Ah;Kang, Jin-Kyu;Ahn, Hyoung-Joon;Choi, Jong-Hoon;Kim, Chong-Youl
    • Journal of Oral Medicine and Pain, v.30 no.2, pp.163-199, 2005
  • Dentistry has been considered relatively safe from the risk of medical accidents because it involves few emergency cases. In recent years, however, the number of medical disputes has been increasing to the point that dentists can no longer regard them as someone else's problem. Research on medical accidents and related matters has therefore begun to appear, but the data are still insufficient. This study analyzed the actual conditions of medical accidents and disputes, and the general awareness of dental practitioners in local clinics, in order to understand the overall situation and suggest countermeasures. The study analyzed 1,882 questionnaires collected from a total of 3,684 dentists belonging to the Seoul Dental Association, where the Doctors and Hospitals Medical Malpractice Insurance for dentists is administered. The results were as follows: 1. 98.47% of the respondents recognized the risk of medical accidents and disputes. 2. 27.42% of the respondents had experienced a medical dispute, and there was no significant difference in dispute rates by resident training. 3. Among medical accidents, those related to periodontal/operative treatment showed the highest rate (20.50%), while those related to implant treatment accounted for 6.17%. 4. 43.02% of the respondents explained the procedure before treatment, while 25.90% started treatment without the patient's consent. 5. Medical disputes resulting from the absence of explanation or patient consent accounted for 16.55%, and 10.26% had difficulty resolving disputes because medical records were missing. 6. 49.73% reported being capable of administering first aid, and among them 23.60% had accurate knowledge of emergency care. 7. During a medical dispute, 88.09% sought counsel from other dentists, most frequently through the local district dental association. 8. In medical disputes, 5.26% of the respondents were asked to submit relevant data to a consumer protection organization, and 75.61% of them complied sincerely. 9. After settlement of a dispute, 83.63% recovered a relatively stable state of mind. 10. 99.46% of the respondents felt the need for a medical dispute management organization, and 78.58% considered it urgent. 11. 66.70% of the respondents had joined the Doctors and Hospitals Medical Malpractice Insurance even without having experienced a medical dispute; however, 73.36% were not well informed about it, and 93.36% of members did not know the dispute settlement procedure. 12. 79.0% of the insured respondents still felt confused when a medical dispute occurred, but felt relatively safer than before. 13. When disputes were settled through the insurance, 71.92% of dentists were at least moderately satisfied, compared with 35.16% of patients. 14. To complement the insurance, 53.22% of the respondents felt that the insurance company, dentist, and patient should all participate in reaching mutual agreement for quick settlement of disputes, and 29.08% wanted the insurance company to prevent patients from disturbing their practices. These results indicate that greater awareness of the increasing rate of medical disputes, together with education and complementary measures for dispute settlement, is required.

The Jurisdictional Precedent Analysis of Medical Dispute in Dental Field (치과임상영역에서 발생된 의료분쟁의 판례분석)

  • Kwon, Byung-Ki;Ahn, Hyoung-Joon;Kang, Jin-Kyu;Kim, Chong-Youl;Choi, Jong-Hoon
    • Journal of Oral Medicine and Pain, v.31 no.4, pp.283-296, 2006
  • Along with the development of scientific technology, health care has grown remarkably, and as quality of life improves and interest in health increases, the demand for medical services is rising rapidly. However, medical accidents and disputes are also increasing rapidly due to various factors: a growing sense of patients' rights, lack of understanding of the nature of medical practice, excessive expectations of medical technology, a commercialized medical supply system, moral laxity and unawareness of medical jurisprudence among doctors, a widespread climate of mutual distrust, and the lack of a systematized mechanism for resolving medical disputes. This study analyzed 30 civil suits filed between 1994 and 2004, selected from medical dispute cases in the dental field, with judgments collected from organizations related to dentistry and the Department of Oral Medicine, Yonsei University Dental Hospital. The following results were drawn from the analyses: 1. The distribution by year showed a rapid increase in medical disputes after 2000. 2. Among the types of medical dispute, suits associated with tooth extraction accounted for 36.7% of the total. 3. As for the causes of dispute, discomfort and dissatisfaction with the treatment accounted for 36.7%, while death and permanent damage accounted for 16.7% each. 4. Judgments for the plaintiff, compulsory mediation, and recommendations for settlement together accounted for 60.0% of outcomes. 5. By type of medical organization, 60.0% of disputes involved private dental clinics and 30.0% university dental hospitals. 6. 30.0% of disputes progressed to a second or third trial. 7. Claims for damages between 50 million and 100 million won accounted for 36.7%, and claims above 100 million won for 13.3%; judgment amounts between 10 million and 30 million won accounted for 40.0%, and amounts above 100 million won for 6.7%. 8. In 26.7% of suits, two or more dentists were involved. 9. 46.7% of suits took 11 to 20 months until judgment, and 36.7% took 21 to 30 months. 10. Medical malpractice was found in 46.7% of cases, and 70% of cases underwent expert medical judgment or verification during the suit. 11. Among the cases lost by doctors (18 cases), 72.2% were due to violation of the duty of care in practice and 16.7% to failure to explain to the patient. Medical disputes in dentistry usually involve relatively low-risk cases. Hence the importance of explanation to the patient is emphasized; and since patient satisfaction is subjective, improving the patient-dentist relationship and restoring autonomy within the profession are essential, in addition to reducing technical malpractice. Moreover, countermeasures against medical disputes should be established by complementing the current doctors and hospitals medical malpractice insurance, which is being operated irrationally, and by building a system in which education and consultation on medical disputes, led by dental clinicians and academic scholars, are accessible.

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.93-108, 2014
  • To support business decision making, interest in and efforts toward analyzing transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology, and to keep up with this evolution there have been many efforts to improve the accuracy and ease of fraud detection methods and application systems. As a case study, this paper aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception policy exists to complement auction-based trades in the wholesale market: most agricultural products are traded by auction, but specific products are designated as auction exceptions when total volumes are relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the products. However, the policy creates problems for the fairness and transparency of transactions, which calls for fraud detection. To generate fraud detection rules, this study analyzed real trade transaction data from the market from 2008 to 2010, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volume and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions were identified by comparing the daily, weekly, and quarterly average unit prices of product items; quarterly average unit prices of product items for specific wholesalers were also used. The reliability of the generated rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value are applied: the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability, approximating the distribution of unit prices as normal. A modified Z-value is used rather than the original one, because for auction exception agricultural products the number of wholesalers is small and Z-values are influenced by the outlier fraud transactions themselves. The modified Z-values are called Self-Eliminated Z-scores because each is calculated excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud detection system was developed in Delphi. The system consists of five main menus and related submenus: importing transaction databases; setting fraud detection parameters, through which users can control the number of potential fraud transactions; and execution functions that provide detection results based on those parameters. Potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by availability; more data on transactions, wholesalers, and producers are needed to detect fraud more accurately. Next, the scope of detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques, for example time series analysis. Finally, although outlier transactions were detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
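The Self-Eliminated Z-score described above is a leave-one-out standardization: each unit price is scored against the statistics of all other prices, so an outlier cannot inflate the mean and standard deviation used to judge it. A minimal sketch (the function name, sample prices, and the threshold of 3 are our own illustration, not values from the paper):

```python
import numpy as np

def self_eliminated_z(prices):
    """Leave-one-out Z-score: each price is standardized against the
    mean and std of all *other* prices, so an outlier transaction does
    not influence the statistics used to check it."""
    prices = np.asarray(prices, dtype=float)
    z = np.empty_like(prices)
    for i in range(len(prices)):
        rest = np.delete(prices, i)          # exclude the checked price
        z[i] = (prices[i] - rest.mean()) / rest.std(ddof=1)
    return z

# Flag transactions whose unit price deviates strongly from its peers
unit_prices = [100, 102, 98, 101, 99, 250]   # hypothetical unit prices
flags = np.abs(self_eliminated_z(unit_prices)) > 3.0
```

With an ordinary Z-score the 250 outlier would inflate the standard deviation it is measured against; excluding it makes the deviation obvious.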

Construction of Consumer Confidence Index Based on Sentiment Analysis Using News Articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.1-27, 2017
  • It is known that the economic sentiment index and macroeconomic indicators are closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, where private consumption accounts for a large share of GDP, the consumer sentiment index is a very important indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limitations. One weakness is that it takes considerable time to research, collect, and aggregate the data: if an urgent issue arises, timely information is not announced until the end of the month. In addition, the survey contains only information derived from questionnaire items, so it can be difficult to capture the direct effects of newly arising issues; it also faces declining response rates and erroneous responses. It is therefore necessary to find a way to complement it. For this purpose, we construct and assess an index of consumer economic sentiment using sentiment analysis. Unlike survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific issues, they have great potential as economic indicators. Of the two main approaches to the automatic extraction of sentiment from text, we apply the lexicon-based approach, using sentiment dictionaries of words annotated with their semantic orientations. In creating the sentiment dictionaries, we enter the semantic orientation of individual words manually and do not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. We generate a time series index of economic sentiment in the news in three broad steps: (1) collecting a large corpus of economic news articles on the web; (2) applying lexicon-based sentiment analysis to score each article's orientation (positive, negative, or neutral); and (3) constructing a consumer economic sentiment index by aggregating monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, any new index should be assessed for its usefulness; we do so by comparing the new index with other economic indicators and the CSI. Trend and cross-correlation analyses are carried out to examine the relations and lag structure, and forecasting power is analyzed using one-step-ahead out-of-sample prediction. As a result, the news sentiment index correlates strongly with related contemporaneous key indicators in almost all experiments, and news sentiment shocks predict future economic activity in most cases; in head-to-head comparisons, the news sentiment measures outperform the survey-based CSI. Policy makers want to understand consumer and public opinion about existing or proposed policies, and monitoring various web media, SNS, and news articles enables relevant government decision-makers to respond quickly. Textual data such as news articles and social network posts (Twitter, Facebook, and blogs) are generated at high speed and cover a wide range of issues, giving them great potential as economic indicators. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed.
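The three-step index construction described above can be sketched as follows. This is a toy illustration with a hypothetical English sentiment lexicon (the study builds its own dictionaries for Korean news, and its aggregation scheme may differ):

```python
from collections import defaultdict

# Hypothetical sentiment lexicon: word -> semantic orientation
LEXICON = {"growth": 1, "recovery": 1, "rise": 1,
           "recession": -1, "decline": -1, "crisis": -1}

def article_orientation(text):
    """Score an article as +1 (positive), -1 (negative), or 0 (neutral)
    by summing the lexicon polarities of its words."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return (score > 0) - (score < 0)

def monthly_index(articles):
    """Aggregate (month, text) pairs into a monthly sentiment index:
    (positive articles - negative articles) / total articles per month."""
    pos, neg, tot = defaultdict(int), defaultdict(int), defaultdict(int)
    for month, text in articles:
        s = article_orientation(text)
        tot[month] += 1
        pos[month] += (s > 0)
        neg[month] += (s < 0)
    return {m: (pos[m] - neg[m]) / tot[m] for m in tot}
```

The resulting monthly series can then be compared with the CSI via cross-correlation, as in the study.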

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.35-52, 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Continuous measurement and improvement of the quality of public services are needed, but traditional surveys are costly, time-consuming, and limited. An analytical technique is therefore needed that can measure the quality of public services quickly and accurately, at any time, based on the data those services generate. In this study, we analyzed the quality of public services using process mining techniques on the building licensing complaint service of N city, chosen because it can secure the data necessary for analysis and because the approach can spread to other institutions for public service quality management. We conducted process mining on a total of 3,678 building license complaints in N city over two years from January 2014, and identified process maps and the departments with high frequency and long processing times. The analysis showed that departments were crowded at some points in time and relatively idle at others, and gave reasonable grounds to suspect that an increase in the number of complaints lengthens completion times. Completion times varied from the same day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%; the frequently involved departments were few, and the load across departments was highly unbalanced. Most complaints followed a variety of different process patterns. The analysis shows that the number of 'complement' (supplementation) decisions has the greatest impact on the length of a complaint: a 'complement' decision requires a physical period in which the complainant supplements and resubmits documents, lengthening the time until the whole complaint is completed. To address this, the overall processing time could be drastically reduced if complainants prepared their documents thoroughly before or during filing, drawing on the causes of 'complement' decisions in other complaints. By clarifying and disclosing in the system the causes of such decisions and their solutions, complainants can prepare in advance and be confident that documents prepared from the disclosed information will pass; the handling of complaints becomes sufficiently predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency by eliminating renegotiation and repeated tasks on the processor's side. The results of this study can be used to find departments with heavy complaint loads at particular points in time and to manage workforce allocation between departments flexibly. By analyzing the patterns of the departments participating in consultations by complaint characteristics, the results can also support automation or recommendation when a consultation department is requested. Furthermore, by applying machine learning to the various data generated during complaint processing, the patterns of the complaint process can be found and applied to the system for the automation and intelligence of complaint handling. This study is expected to inform future public service quality improvement through process mining analysis of civil services.
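The frequency and processing-time profiling described above can be sketched over a toy event log. The column layout, department names, and dates below are hypothetical; the study used N city's actual complaint records and full process-mining tooling:

```python
from collections import defaultdict
from datetime import date

# Toy event log: (case_id, department, start, end) -- hypothetical schema
events = [
    (1, "Sewage Treatment", date(2014, 1, 6),  date(2014, 1, 10)),
    (1, "Waterworks",       date(2014, 1, 10), date(2014, 1, 13)),
    (2, "Sewage Treatment", date(2014, 2, 3),  date(2014, 2, 20)),
    (2, "Urban Design",     date(2014, 2, 20), date(2014, 2, 21)),
]

freq = defaultdict(int)         # how often each department appears
total_days = defaultdict(int)   # total days each department holds a case
for _, dept, start, end in events:
    freq[dept] += 1
    total_days[dept] += (end - start).days

# Average processing time per department and the most loaded department
profile = {d: total_days[d] / freq[d] for d in freq}
busiest = max(freq, key=freq.get)
```

From such a profile one can spot the unbalanced load between departments that the study reports.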

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose here is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. This study thus has three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the extracted knowledge by generating input data on a sentence basis without complex morphological analysis. An objective performance evaluation method and the results of the empirical analysis are also presented. For the empirical study, experts' reports on 30 individual stocks (the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018) are used. The total number of reports is 5,600; 3,074 reports (about 55%) are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. When a new entity from the test set appears, its score is calculated with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the test set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at per-stock prediction performance, only three stocks (LG Electronics, KiaMtr, and Mando) show markedly lower performance than average, possibly due to interference from similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, limits and points to complement remain; most notably, the especially poor performance on some stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with related stocks.
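The per-stock score functions follow the neural tensor network form of Socher et al., in which a bilinear tensor term and a linear term feed a nonlinearity. A minimal sketch with random weights (the dimensions, weight shapes, and `ntn_score` helper are our own illustration; the paper's trained parameters and exact architecture are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def ntn_score(e, W, V, b, u):
    """Neural tensor network score for an entity vector e:
    u^T tanh(e^T W e + V e + b), with a slice-wise bilinear tensor W.
    One such score function would be trained per stock; a new entity is
    assigned to the stock whose function gives the highest score."""
    k = W.shape[0]                                    # number of tensor slices
    bilinear = np.array([e @ W[i] @ e for i in range(k)])
    return float(u @ np.tanh(bilinear + V @ e + b))

d, k = 100, 4                   # top-100 entities one-hot encoded; 4 slices
W = rng.normal(size=(k, d, d))
V = rng.normal(size=(k, d))
b = rng.normal(size=k)
u = rng.normal(size=k)

e = np.zeros(d); e[7] = 1.0     # one-hot vector for one extracted entity
score = ntn_score(e, W, V, b, u)
```

The hit ratio reported above is then simply the fraction of test entities whose highest-scoring function belongs to the correct stock.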

Radiation Therapy Using M3 Wax Bolus in Patients with Malignant Scalp Tumors (악성 두피 종양(Scalp) 환자의 M3 Wax Bolus를 이용한 방사선치료)

  • Kwon, Da Eun;Hwang, Ji Hye;Park, In Seo;Yang, Jun Cheol;Kim, Su Jin;You, Ah Young;Won, Young Jinn;Kwon, Kyung Tae
    • The Journal of Korean Society for Radiation Therapy, v.31 no.1, pp.75-81, 2019
  • Purpose: Helmet type bolus for 3D printer is being manufactured because of the disadvantages of Bolus materials when photon beam is used for the treatment of scalp malignancy. However, PLA, which is a used material, has a higher density than a tissue equivalent material and inconveniences occur when the patient wears PLA. In this study, we try to treat malignant scalp tumors by using M3 wax helmet with 3D printer. Methods and materials: For the modeling of the helmet type M3 wax, the head phantom was photographed by CT, which was acquired with a DICOM file. The part for helmet on the scalp was made with Helmet contour. The M3 Wax helmet was made by dissolving paraffin wax, mixing magnesium oxide and calcium carbonate, solidifying it in a PLA 3D helmet, and then eliminated PLA 3D Helmet of the surface. The treatment plan was based on Intensity-Modulated Radiation Therapy (IMRT) of 10 Portals, and the therapeutic dose was 200 cGy, using Analytical Anisotropic Algorithm (AAA) of Eclipse. Then, the dose was verified by using EBT3 film and Mosfet (Metal Oxide Semiconductor Field Effect Transistor: USA), and the IMRT plan was measured 3 times in 3 parts by reproducing the phantom of the head human model under the same condition with the CT simulation room. Results: The Hounsfield unit (HU) of the bolus measured by CT was $52{\pm}37.1$. The dose of TPS was 186.6 cGy, 193.2 cGy and 190.6 cGy at the M3 Wax bolus measurement points of A, B and C, and the dose measured three times at Mostet was $179.66{\pm}2.62cGy$, $184.33{\pm}1.24cGy$ and $195.33{\pm}1.69cGy$. And the error rates were -3.71 %, -4.59 %, and 2.48 %. The dose measured with EBT3 film was $182.00{\pm}1.63cGy$, $193.66{\pm}2.05cGy$ and $196{\pm}2.16cGy$. The error rates were -2.46 %, 0.23 % and 2.83 %. Conclusions: The thickness of the M3 wax bolus was 2 cm, which could help the treatment plan to be established by easily lowering the dose of the brain part. 
The maximum error rate of the scalp surface dose was within 5 %, and generally within 3 %, across the A, B, and C measurements of both the EBT3 film and MOSFET dosimeters in the treatment dose verification. The M3 wax bolus is faster and cheaper to make than a 3D-printed bolus, can be reused, and is very useful as a human-tissue-equivalent material for the treatment of scalp malignancies. Therefore, we expect the use of casting-type M3 wax boluses, which complement the fabrication time and cost of large-volume boluses and compensators made with 3D printers, to increase.
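The error rates above are simple signed percent deviations of the measured dose from the TPS dose. A minimal sketch (the function name is ours; the values are the point-A/B/C figures reported above, and tiny rounding differences from the published percentages can arise from unrounded measurement means):

```python
# Percent deviation of a measured dose from the planned (TPS) dose.
def percent_error(measured_cgy: float, planned_cgy: float) -> float:
    """Signed percent deviation of the measured dose from the planned dose."""
    return (measured_cgy - planned_cgy) / planned_cgy * 100.0

tps = {"A": 186.6, "B": 193.2, "C": 190.6}        # TPS doses (cGy)
mosfet = {"A": 179.66, "B": 184.33, "C": 195.33}  # mean MOSFET doses (cGy)

for point in tps:
    print(f"{point}: {percent_error(mosfet[point], tps[point]):+.2f} %")
```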

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial-intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analysis. Against this background, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit-guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare the performance of predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it combines five key financial ratios into a score that predicts the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. 
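For reference, the five-ratio Altman model mentioned above is a linear score; the coefficients and zone cutoffs below are the commonly cited ones from the literature for the original public-manufacturing-firm model, not values used in this paper:

```python
def altman_z(x1, x2, x3, x4, x5):
    """Original Altman (1968) Z-score, as commonly cited.

    x1: working capital / total assets
    x2: retained earnings / total assets
    x3: EBIT / total assets
    x4: market value of equity / book value of total liabilities
    x5: sales / total assets
    """
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Commonly cited interpretation: Z < 1.81 distress zone, Z > 2.99 safe zone.
print(round(altman_z(0.1, 0.1, 0.1, 1.0, 1.0), 2))  # 2.19: the "grey" zone
```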
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and a logit model, and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, rather than only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70 % on the entire sample: the LightGBM model shows the highest accuracy, 71.1 %, and the logit model the lowest, 69 %. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 error, which causes the more harmful operating losses for the guarantee company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. Examined per interval, the logit model has the highest accuracy, 100 %, for the 0-10 % interval of predicted default probability, but a relatively low accuracy of 61.5 % for the 90-100 % interval. 
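The per-interval comparison above can be sketched as follows. This is not the authors' code: the function name, the 0.5 hard-label threshold, and the synthetic usage data are our assumptions; only the ten-equal-interval split is taken from the paper.

```python
import numpy as np

def decile_accuracy(y_true, p_default):
    """Accuracy and sample count within each 10 %-wide probability interval."""
    y_true = np.asarray(y_true)
    p_default = np.asarray(p_default)
    y_pred = (p_default >= 0.5).astype(int)             # hard label at the 0.5 cut
    bins = np.clip((p_default * 10).astype(int), 0, 9)  # 0-10 % -> bin 0, ..., 90-100 % -> bin 9
    out = []
    for b in range(10):
        mask = bins == b
        n = int(mask.sum())
        acc = float((y_pred[mask] == y_true[mask]).mean()) if n else float("nan")
        out.append((b * 10, (b + 1) * 10, n, acc))
    return out

# Toy usage with synthetic scores and labels correlated with them:
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < p).astype(int)
for lo, hi, n, acc in decile_accuracy(y, p):
    print(f"{lo:3d}-{hi:3d} %: n={n:4d} acc={acc:.3f}")
```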
On the other hand, Random Forest, XGBoost, LightGBM, and DNN give more desirable results: they achieve higher accuracy in both the 0-10 % and 90-100 % intervals of predicted default probability, though lower accuracy around the 50 % interval. Regarding the distribution of samples across intervals, both the LightGBM and XGBoost models place a relatively large number of samples in the 0-10 % and 90-100 % intervals. Although the Random Forest model has an advantage in classification accuracy over a small number of cases, LightGBM or XGBoost may be more desirable models, since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, the Random Forest model is almost 100 % accurate for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that combine multiple machine learning classifiers and conduct majority voting to maximize overall performance.
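The majority-voting step of such an ensemble reduces to a column-wise vote over each model's hard labels. A minimal sketch (the model names in the comments are illustrative; an odd number of models avoids ties):

```python
import numpy as np

def majority_vote(predictions):
    """Column-wise majority vote over an (n_models, n_samples) 0/1 label array."""
    votes = np.asarray(predictions)
    # A sample is labeled 1 when more than half of the models vote 1.
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(int)

# Hard labels from three hypothetical models for five sample cases:
preds = [
    [1, 0, 1, 0, 1],  # e.g. XGBoost
    [1, 1, 1, 0, 0],  # e.g. LightGBM
    [0, 0, 1, 1, 1],  # e.g. Random Forest
]
print(majority_vote(preds))  # -> [1 0 1 0 1]
```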

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising was in fashion. As the Internet became a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, leading to the emergence of keyword advertising as a new type of Internet advertising to replace it. In the early 2000s, when Internet advertising took off, display advertising, including banner advertising, dominated the Net. However, display advertising gradually declined and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising from 2005. Keyword advertising is the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms, because instead of the seller discovering customers and running an advertisement for them, as with TV, radio, or banner advertising, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online with a single word and to achieve maximum efficiency at minimum cost. 
The strong point of keyword advertising is that customers can directly reach the products in question, making it more efficient than advertising in mass media such as TV and radio. Its weak points are that a company must register its advertisement on each and every portal site, finds it hard to exercise substantial supervision over the advertisement, and may see advertising expenses exceed profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low cost. At present, keyword advertising is divided into CPC and CPM advertising. The former, known as the most efficient technique, is billed on a metered basis: a company pays for the number of clicks on the keyword users have searched. It is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising uses a flat-rate payment system, charging a company on the basis of the number of exposures rather than the number of clicks; the price is fixed per 1,000 exposures, and it is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. Its weak point is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn visitors into prospective customers. 
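The two billing models differ only in what is metered. An illustrative cost sketch (all prices, rates, and volumes below are made-up figures, not from the paper):

```python
# CPC bills per click; CPM bills per 1,000 exposures (impressions).
def cpc_cost(clicks: int, price_per_click: float) -> float:
    return clicks * price_per_click

def cpm_cost(impressions: int, price_per_1000: float) -> float:
    return impressions / 1000 * price_per_1000

impressions = 50_000
ctr = 0.02                       # assumed 2 % click-through rate
clicks = int(impressions * ctr)  # 1,000 clicks

print(cpc_cost(clicks, 300.0))        # 1,000 clicks at 300 won each -> 300000.0
print(cpm_cost(impressions, 2000.0))  # 50 CPM units at 2,000 won -> 100000.0
```

Under these assumed numbers CPM is cheaper, but with a lower click-through rate the comparison flips, which is why the metered CPC model is often considered the more efficient of the two.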
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying to find out what they want, and should put multiple keywords to use when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. Because the popular keywords that search-engine users frequently enter carry a high unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords that suit their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keywords take the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy; but because most keyword advertising is text, it fails to attract much attention. Image-embedded advertising is easy to notice thanks to its images, but it appears on the lower part of a web page and is clearly recognized as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to use image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring behavior in detail. 
Keyword advertising also allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis is the close examination of a site's current situation by analyzing information about its visitors, based on visitor counts, page views, and cookie values. A user's IP, the pages used, the times of use, and cookie values are stored in the log files generated by each web server. The log files contain a huge amount of data, and since direct analysis of them is nearly impossible, one analyzes them with log-analysis solutions. The generic information that such tools can extract includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, total visits, average visits per day, net visitors, average visitors per day, one-time visitors, repeat visitors, and average usage hours. These data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others let all advertisers purchase the keywords in question once the advertising contract is over. 
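A minimal sketch of the kind of extraction such log-analysis tools perform, using a few hypothetical common-log-style lines (the format and sample entries are our assumptions):

```python
from collections import Counter

# Hypothetical log lines: 'IP - - [time] "METHOD /path" status'
log_lines = [
    '1.2.3.4 - - [01/Mar/2010:10:00:00] "GET /index.html" 200',
    '1.2.3.4 - - [01/Mar/2010:10:00:05] "GET /item/42" 200',
    '5.6.7.8 - - [01/Mar/2010:10:01:00] "GET /index.html" 200',
]

ips = [line.split()[0] for line in log_lines]                # visitor IPs
pages = [line.split('"')[1].split()[1] for line in log_lines]  # requested paths

total_hits = len(log_lines)            # every logged request is a hit
unique_visitors = len(set(ips))        # "net number of visitors" by IP
page_views = Counter(pages)            # per-page view counts

print(total_hits)                 # 3
print(unique_visitors)            # 2
print(page_views["/index.html"])  # 2
```

Real tools compute the per-day averages and visit-level metrics listed above by additionally grouping these records by timestamp and session.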
On sites that give priority to established advertisers, an advertiser relying on seasonal or time-sensitive keywords may as well purchase a vacant advertising slot lest he or she miss the appropriate timing. Naver, however, does not give priority to existing advertisers for keyword advertisements; in this case one can preoccupy keywords by entering into a contract after confirming the contract period. This study takes a look at keyword advertising marketing and presents effective strategies for it. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the registration of advertisements at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has weak points too: it is not the one perfect advertising model among search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies that maximize its strengths, so as to increase their sales and create a point of contact with customers.
