• Title/Summary/Keyword: 김기태 (Kim Ki-Tae)

Search Results: 628

Comparison on the Extraction Efficiency and Antioxidant Activity of Flavonoid from Citrus Peel by Different Extraction Methods (추출방법에 따른 감귤 과피 유래 Flavonoid의 추출효율 및 항산화 효과에 대한 비교)

  • Cheigh, Chan-Ick;Jung, Won-Guen;Chung, Eun-Young;Ko, Min-Jung;Cho, Sang-Woo;Lee, Jae-Hwan;Chang, Pahn-Shick;Park, Young-Seo;Paik, Hyun-Dong;Kim, Kee-Tae;Chung, Myong-Soo
    • Food Engineering Progress
    • /
    • v.14 no.2
    • /
    • pp.166-172
    • /
    • 2010
  • Polyphenols and flavonoids were extracted from citrus peel by ethanol, sugar, hot water (80°C), and subcritical water extraction methods. The maximum yields of total polyphenolic compounds (27.25±1.33 mg QE/g DCP, where QE and DCP denote quercetin equivalent and dried citrus peel, respectively) and flavonoids (7.31±0.41 mg QE/g DCP) were obtained by subcritical water extraction (SWE) under operating conditions of 190°C, 1,300 psi, and 10 min. These yields were over 7.2- and 8.5-fold higher than the yields of total polyphenols (3.79±0.73 mg QE/g DCP) and flavonoids (0.86±0.27 mg QE/g DCP) obtained by ethanol extraction, which showed the highest extraction efficiency among the conventional methods tested. The antioxidant activities of the extracts obtained by the different methods showed no significant differences; however, the relative antioxidant yield per 1 g of dried citrus peel by SWE (190°C, 10 min) was over 9.5-fold higher than that by ethanol extraction.
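As a quick arithmetic sanity check of the fold increases quoted above (the yield figures are taken from the abstract; the script itself is ours):

```python
# Yield ratios of subcritical water extraction (SWE) vs. ethanol extraction,
# using the per-gram yields reported in the abstract (mg QE/g DCP).
swe = {"polyphenols": 27.25, "flavonoids": 7.31}      # SWE, 190 °C, 1,300 psi, 10 min
ethanol = {"polyphenols": 3.79, "flavonoids": 0.86}   # best conventional method

fold = {k: swe[k] / ethanol[k] for k in swe}
print(fold)  # ≈ 7.2-fold (polyphenols) and ≈ 8.5-fold (flavonoids)
```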

Effect of Subcritical Water for the Enhanced Extraction Efficiency of Polyphenols and Flavonoids from Black Rice Bran (흑미강으로부터 유용 폴리페놀 및 플라보노이드의 추출효율 증진을 위한 아임계수의 효과)

  • Cheigh, Chan-Ick;Chung, Eun-Young;Ko, Min-Jung;Cho, Sang-Woo;Chang, Pahn-Shick;Park, Young-Seo;Lee, Kyoung-Ah;Paik, Hyun-Dong;Kim, Kee-Tae;Hong, Seok-In;Chung, Myong-Soo
    • Food Engineering Progress
    • /
    • v.14 no.4
    • /
    • pp.335-341
    • /
    • 2010
  • Polyphenols and flavonoids were extracted from black rice bran by diverse methods using a sugar solution, ethanol, hot water (80°C), and the subcritical water extraction (SWE) method. By SWE under operating conditions of 190°C, 1,300 psi, and 10 min, the maximum yields of total polyphenolic compounds (35.06±1.28 mg quercetin equivalent (QE)/g dried material) and flavonoids (7.08±0.31 mg QE/g dried material) were obtained. These yields were over 11.77- and 12.21-fold higher than those of ethanol extraction, which showed the highest extraction efficiency among the conventional methods tested for total polyphenols (2.98±0.74 mg QE/g dried material) and flavonoids (0.58±0.21 mg QE/g dried material). Although the highest antioxidant activity (87.14±1.14%) was observed in the dried extract obtained by the ethanol method, the relative antioxidant activity per 1 g of dried black rice bran by SWE (190°C, 10 min) was over 11.53-fold higher than that by ethanol extraction.

Analysis of Rice Blast Outbreaks in Korea through Text Mining (텍스트 마이닝을 통한 우리나라의 벼 도열병 발생 개황 분석)

  • Song, Sungmin;Chung, Hyunjung;Kim, Kwang-Hyung;Kim, Ki-Tae
    • Research in Plant Disease
    • /
    • v.28 no.3
    • /
    • pp.113-121
    • /
    • 2022
  • Rice blast is a major plant disease that occurs worldwide and significantly reduces rice yields. Rice blast disease occurs periodically in Korea, causing significant socio-economic damage due to the unique status of rice as a major staple crop. A disease outbreak prediction system is required for preventing rice blast disease. Epidemiological investigations of disease outbreaks can aid in decision-making for plant disease management. Currently, plant disease prediction and epidemiological investigations are mainly based on quantitatively measurable, structured data such as crop growth and damage, weather, and other environmental factors. On the other hand, text data related to the occurrence of plant diseases are accumulated along with the structured data. However, epidemiological investigations using these unstructured data have not been conducted. The useful information extracted using unstructured data can be used for more effective plant disease management. This study analyzed news articles related to the rice blast disease through text mining to investigate the years and provinces where rice blast disease occurred most in Korea. Moreover, the average temperature, total precipitation, sunshine hours, and supplied rice varieties in the regions were also analyzed. Through these data, it was estimated that the primary causes of the nationwide outbreak in 2020 and the major outbreak in Jeonbuk region in 2021 were meteorological factors. These results obtained through text mining can be combined with deep learning technology to be used as a tool to investigate the epidemiology of rice blast disease in the future.
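The keyword-counting core of such a text-mining pass over news articles can be illustrated as follows; the sample headlines and region list are invented for demonstration and are not from the study:

```python
from collections import Counter

# Count how often each province is mentioned in news headlines about
# rice blast ("벼 도열병"). Headlines and region list are invented examples.
articles = [
    "벼 도열병 전북 지역 확산",    # "rice blast spreading in the Jeonbuk region"
    "전북 벼 도열병 피해 급증",    # "rice blast damage surging in Jeonbuk"
    "경기 지역 도열병 발생 보고",  # "blast outbreak reported in the Gyeonggi region"
]
regions = ["전북", "경기", "경남"]  # Jeonbuk, Gyeonggi, Gyeongnam

counts = Counter()
for text in articles:
    for region in regions:
        if region in text:
            counts[region] += 1

print(counts.most_common())  # → [('전북', 2), ('경기', 1)]
```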

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently AlphaGo, a Go-playing artificial intelligence program developed by Google DeepMind, won a landmark victory over Lee Sedol. Many people believed that a machine could not beat a human at Go because, unlike chess, the number of possible moves at each turn exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence drew attention as a core technology of the fourth industrial revolution across various application domains. In particular, deep learning, the core technique behind the AlphaGo algorithm, attracted interest. Deep learning is already applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether existing deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They contain input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
To evaluate the applicability of deep learning algorithms to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. Because all network design alternatives cannot be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than the overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but in business data the proximity of fields is usually irrelevant because the fields are independent; we therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer so that decisions are based on the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
Several findings emerged from the experiments. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because the CNN has shown good performance not only in the fields where its effectiveness is proven but also in binary classification problems, to which it has rarely been applied. Third, the LSTM algorithm appears unsuitable for binary classification because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
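The F1 score used above to rank the models can be computed from precision and recall on the positive class. A minimal sketch with invented toy labels:

```python
# F1 score on the positive ("interesting") class, as used to compare models
# instead of overall accuracy. The label vectors below are invented.
def f1_score(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(f1_score(y_true, y_pred), 3))  # → 0.75
```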

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, has grown explosively. As e-commerce grows, customers can easily compare and buy from a wider range of products registered at online shopping malls. However, this growth has also created a problem: with so many products registered, it has become difficult for customers to find what they really need. When customers search with a general keyword, too many products are returned; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered. In this situation, automatically recognizing the text in images can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by the current text-based search systems. If the information in these images can be converted to text, customers can search by product details and shop more conveniently. Various OCR (Optical Character Recognition) programs can recognize text in images, but existing OCR programs are hard to apply to catalogs because they fail in certain circumstances, for example when the text is too small or the fonts are inconsistent. This research therefore proposes a way to recognize keywords in catalogs using deep learning, the state of the art in image recognition since the 2010s.
The Single Shot Multibox Detector (SSD), a well-regarded object-detection model, can be used with its structure redesigned to account for the differences between text and ordinary objects. However, an SSD model needs a large amount of labeled training data, because like most deep learning detectors it is trained by supervised learning. Labeling the location and class of each piece of text in catalogs manually raises several problems: keywords can be missed through human error; collection is too time-consuming given the scale of data needed, or too costly if many workers are hired; and if specific keywords must be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures while saving the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model achieved a recognition rate of 81.99% with 20,000 images created by the program. Moreover, this research tested how characteristics of the training data affect the SSD model's ability to recognize text in images. The number of labeled keywords, the presence of overlapping keyword labels, the existence of unlabeled keywords, the spacing between keywords, and differences in background images were all found to be related to the model's performance. These findings can guide performance improvements of the SSD model, or of other deep-learning-based text recognizers, through higher-quality training data.
The SSD model redesigned to recognize text in images, together with the program developed for creating training data, is expected to contribute to improving search systems in e-commerce: suppliers can spend less time registering keywords for their products, and customers can search for products using the details written in the catalogs.
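A hypothetical sketch of the labeling side of such a generator: place keywords at random, non-overlapping positions on a canvas and record SSD-style (keyword, bounding box) labels. Canvas size, glyph dimensions, and the keywords are all invented for illustration, and actual image rendering is omitted.

```python
import random

# Generate (keyword, bbox) training labels for a text detector by placing
# keywords at random non-overlapping positions. All constants are invented.
CANVAS_W, CANVAS_H = 640, 480
CHAR_W, CHAR_H = 16, 24  # assumed fixed glyph size

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw <= bx or bx + bw <= ax or ay + ah <= by or by + bh <= ay)

def make_labels(keywords, rng):
    boxes, labels = [], []
    for kw in keywords:
        w, h = CHAR_W * len(kw), CHAR_H
        for _ in range(100):  # retry until a free spot is found
            x = rng.randrange(0, CANVAS_W - w)
            y = rng.randrange(0, CANVAS_H - h)
            box = (x, y, w, h)
            if not any(overlaps(box, b) for b in boxes):
                boxes.append(box)
                labels.append({"keyword": kw, "bbox": box})
                break
    return labels

labels = make_labels(["cotton", "washable", "blue"], random.Random(0))
print(labels)
```

In a full generator the same loop would also draw each keyword onto a background image at its recorded position, yielding image/label pairs for supervised training.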

The Clinical Significance of Follow Up SCC Levels in Patients with Recurrent Squamous Cell Carcinoma of the Cervix (재발성 자궁경부 편평상피암 환자들에서 Squamous Cell Carcinoma 항원의 유용성)

  • Choi Young Min;Park Sung Kwang;Cho Heung Lae;Lee Kyoung Bok;Kim Ki Tae;Kim Juree;Sohn Seung Chang
    • Radiation Oncology Journal
    • /
    • v.20 no.4
    • /
    • pp.353-358
    • /
    • 2002
  • Purpose: To investigate the clinical usefulness of follow-up serum squamous cell carcinoma antigen (SCC) examinations for the early detection of recurrence in patients treated for cervical squamous cell carcinoma. Materials and Methods: Twenty patients treated for recurrent cervical squamous cell carcinoma between 1997 and 1998, who had achieved complete remission after radiotherapy and who underwent an SCC test around the time recurrence was detected, were included in this study. SCC levels were measured from patient serum by immunoassay, and values below 2 ng/mL were regarded as normal. We evaluated the sensitivity of the SCC test for detecting recurrence; the association between SCC values and recurrence pattern, tumor size, and stage; and the temporal relation between the SCC increase and the detection of recurrence. Results: SCC values were above normal in 17 of the 20 patients, so the sensitivity of the SCC test for detecting recurrence was 85%; the mean and median SCC values were 15.2 and 9.5 ng/mL, respectively. No differences were observed in SCC values according to recurrence site. For 11 patients, SCC values were measured over a period of 6 months before recurrence was detected; the mean and median values were 13.6 and 3.6 ng/mL, respectively. The SCC values of 7 of these patients were above the normal range, and although the values of the other 4 patients were normal, 3 of them were above 1.5 ng/mL. At the time of diagnosis, SCC values were measured for 16 of the 20 recurrent patients, and the values of patients with a bulky tumor (≥4 cm) or in stage IIb or III were higher than those of patients with a non-bulky tumor or in stage Ib or IIa.
Conclusion: The SCC test appears useful for the early detection of recurrence during follow-up in patients treated for cervical squamous cell carcinoma. When an effective salvage treatment becomes available, the benefit of follow-up SCC testing will increase further.
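The reported sensitivity follows directly from the counts in the abstract (17 of 20 recurrent patients above the 2 ng/mL cutoff); the per-patient values are not given, so we compute from the counts alone:

```python
# Sensitivity of the SCC test for detecting recurrence,
# using the counts reported in the abstract.
above_cutoff, recurrences = 17, 20  # patients with SCC >= 2 ng/mL / all recurrences
sensitivity = above_cutoff / recurrences
print(f"{sensitivity:.0%}")  # → 85%
```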

Effect of Bronchial Artery Embolization(BAE) in Management of Massive Hemoptysis (대량 객혈환자에서 기관지 동맥색전술의 효과)

  • Yeo, Dong-Seung;Lee, Suk-Young;Hyun, Dae-Seong;Lee, Sang-Haak;Kim, Seok-Chan;Choi, Young-Mee;Suh, Ji-Won;Ahn, Joong-Hyun;Song, So-Hyang;Kim, Chi-Hong;Moon, Hwa-Sik;Song, Jeong-Sup;Park, Sung-Hak;Kim, Ki-Tae
    • Tuberculosis and Respiratory Diseases
    • /
    • v.46 no.1
    • /
    • pp.53-64
    • /
    • 1999
  • Background: Massive, untreated hemoptysis is associated with a mortality of greater than 50 percent. Since the bleeding arises from a bronchial arterial source in the vast majority of patients, bronchial artery embolization (BAE) has become an accepted treatment for massive hemoptysis, achieving immediate control of bleeding in 75 to 90 percent of patients. Methods: Between 1990 and 1996, we treated 146 patients with hemoptysis by bronchial artery embolization. Catheters (4, 5, or 7 F) and gelfoam, ivalon, and/or microcoils were used for embolization. Results: Pulmonary tuberculosis and related disorders were the most common underlying cause of hemoptysis (72.6%). The immediate success rate in controlling bleeding within 24 hours was 95%, and the recurrence rate was 24.7%; 63.9% of recurrences occurred within 6 months after embolization. Initial angiographic findings such as bilaterality, systemic-pulmonary artery shunt, neovascularity, and aneurysm were not statistically correlated with rebleeding tendency (p>0.05). Among the initial radiographic findings, only pleural lesions were significantly correlated with rebleeding tendency (p<0.05). On repeat bronchial artery angiography performed for rebleeding, recanalization of previously embolized arteries was found in 63.9% of patients, new feeding arteries in 16.7%, and both in 19.4%. Complications such as fever, chest pain, headache, nausea and vomiting, arrhythmia, paralytic ileus, transient sensory loss in the lower extremities, hypotension, and difficulty urinating were noted in 40 patients (27.4%). Conclusion: We conclude that bronchial artery embolization is a relatively safe method for achieving immediate control of massive hemoptysis. Among the initial angiographic findings, we could not identify any predictive factors for subsequent rebleeding. Whether patients with pleural disease have a definitely increased rebleeding tendency warrants further study.


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology, and to keep up, much effort goes into fraud detection methods and advanced application systems that improve the accuracy and ease of detection. As a case of fraud detection, this study aims to provide effective detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception policy exists to complement auction-based trades: most agricultural products are traded by auction, but specific products are designated auction-exception products when their total volume is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing them. However, the policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. To generate fraud detection rules, we analyzed real transaction data from the market between 2008 and 2010, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volume and turbulent, time-dependent price changes. Since this was the first attempt to identify fraudulent transactions in this domain, no training data set existed for supervised learning, so fraud detection rules were generated using an outlier detection approach.
We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; the quarterly average unit prices of product items for specific wholesalers are also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value are applied: a transaction's unit price is transformed to a Z-value to calculate its occurrence probability under an approximating normal distribution. A modified Z-value of the unit price is used rather than the original, because in the case of auction-exception agricultural products the number of wholesalers is small, so Z-values are influenced by the outlier fraudulent transactions themselves. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the very transaction being checked. To show the usefulness of the proposed approach, a prototype fraud detection system was developed using Delphi. The system consists of five main menus with related submenus: functions to import transaction databases; functions to set fraud detection parameters, by which users control the number of potential fraud transactions; and execution functions that report the fraud transactions found under those parameters, which can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products.
Many research topics remain. First, the scope of the analyzed data was limited by availability; more data on transactions, wholesalers, and producers is needed to detect fraudulent transactions more accurately. Next, the scope of fraud detection should be extended to fishery products. Different data mining techniques could also be applied; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are detected here based on unit prices, fraud detection rules could also be derived from transaction volumes.
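The Self-Eliminated Z-score described in the abstract can be sketched as a leave-one-out z-value: each unit price is scored against the mean and standard deviation of the *other* transactions, so a fraudulent outlier cannot dilute its own score. The sample prices below are invented for illustration.

```python
from statistics import mean, stdev

# Self-Eliminated Z-score: z-value of prices[i] computed against the mean
# and standard deviation of all transactions EXCEPT the one being checked.
def self_eliminated_z(prices, i):
    rest = prices[:i] + prices[i + 1:]
    m, s = mean(rest), stdev(rest)
    return (prices[i] - m) / s

unit_prices = [100.0, 102.0, 98.0, 101.0, 250.0]  # last price looks fraudulent
scores = [round(self_eliminated_z(unit_prices, i), 2) for i in range(len(unit_prices))]
print(scores)
```

Because the suspect price is excluded from its own baseline, the outlier receives an extreme score while the normal prices stay near zero; an ordinary z-score over the full list would flag it far less sharply in such a small sample.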