• Title/Summary/Keyword: application method


Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (prescreening, for short) of strategic materials has so far been performed by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, and developing an expert takes a long time. Because human experts must manually evaluate every document submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the reliance on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, the de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) counts how often a term occurs within a document, showing how important the term is to that document, while IDF (Inverse Document Frequency) is based on how rarely the term occurs across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on collaboration between machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (${\alpha}$) and document-to-nuclear-system similarity (${\beta}$) to derive a final score (${\gamma}$) for deciding whether the presented case concerns strategic material. The final score (${\gamma}$) represents the document similarity between past cases and the new case; it is induced not only by conventional TF-IDF but also by a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents in the case base that are most similar to the new case and presents them together with a degree of credibility. With the final score and the credibility score, a user can more easily see which documents in the case base are worth looking up, and can therefore make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data; the system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can serve as a meaningful example of knowledge service application.
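The abstract does not give the exact TF-IDF formulation used in the system; as a minimal, hedged sketch in pure Python of a common variant (the documents and terms below are invented for illustration):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for each term in each document.

    TF: within-document term frequency (normalized by document length).
    IDF: log(N / df), so terms appearing in few documents of the
    collection get a higher, more document-specific weight.
    """
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["reactor", "fuel", "export"],
        ["reactor", "coolant", "pump"],
        ["export", "permit", "permit"]]
w = tf_idf(docs)
# "permit" occurs only in the third document, so it outweighs the
# shared term "export" there as a representative keyword.
```

The keywords extracted this way would serve as the case features; the combination into the final score (${\gamma}$) is specific to the paper and is not reproduced here.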

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability. Many studies have been conducted to improve the performance of image captioning in various respects; recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, that is, by identifying the image's constituent objects and their relationships. Domain experts, on the contrary, tend to focus on the specific elements necessary to interpret the image based on their expertise. In other words, the meaningful parts of an image differ with the viewer's perspective even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning with expertise data may raise another problem: simultaneous learning with captions of various characteristics may cause a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning on vast amounts of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of image / expertise captions were created and used for the expertise transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated by learning on general data alone contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. By applying the proposed methodology to expertise transplantation in various fields, we expect future research to address the scarcity of expertise data and to improve the performance of image captioning.
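The paper's character-independent transfer learning applies to image captioning models; the code below is only a toy, one-parameter illustration of the generic pre-train-then-fine-tune pattern the abstract describes (the data, the linear model, and the learning rates are all invented):

```python
def train(w, data, lr, epochs):
    """Fit a one-weight linear model y = w * x by gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)**2
    return w

# Pre-training: abundant "general" data generated by y = 2x.
general = [(x, 2.0 * x) for x in range(1, 6)]
w_pre = train(0.0, general, lr=0.01, epochs=50)

# Fine-tuning ("expertise transplant"): a few examples following y = 3x.
expert = [(1, 3.0), (2, 6.0), (3, 9.0)]
w_ft = train(w_pre, expert, lr=0.01, epochs=200)
# The pre-trained weight moves from ~2.0 toward the expert relation ~3.0.
```

In the paper's setting the "weight" is the caption model's parameters and the expert data are the ~300 expertise captions; this sketch only shows how a small specialized dataset can shift a model already fitted to general data.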

Studies on the Varietal Difference in the Physiology of Ripening in Rice with Special Reference to Raising the Percentage of Ripened Grains (수도 등숙의 품종간차이와 그 향상에 관한 연구)

  • Su-Bong Ahn
    • KOREAN JOURNAL OF CROP SCIENCE / v.14 / pp.1-40 / 1973
  • There is a general tendency to increase the nitrogen level in rice production to secure higher yields. On the other hand, the percentage of ripened grains decreases at such increased fertilizer levels, and this decrease is one of the important yield-limiting factors. In particular, the newly developed rice variety 'Tongil' is characterized by a relatively low percentage of ripened grains compared with the other leading varieties. These studies were therefore aimed at finding measures for the improvement of ripening in rice. The studies were carried out in the field and in the phytotron over three years, from 1970 to 1972, at the Crop Experiment Station in Suwon. The results obtained from the experiments can be summarized as follows: 1. The spikelet of Tongil was longer, narrower, thinner, smaller in grain volume and lighter in grain weight than that of Jinheung. In Jinheung, the specific gravity of the grain was closely correlated with grain weight, with progressively weaker relationships with thickness, width and length. Tongil showed a different pattern: the relationship of specific gravity with grain weight was the greatest, followed by width, thickness and length, in that order. 2. The distribution of grain weight selected by specific gravity differed between varieties. Most grains of Jinheung were distributed above a specific gravity of 1.12 with a peak at 1.18, but many grains of Tongil were distributed below 1.12 with a peak at 1.16. The brown/rough rice ratio declined sharply below a specific gravity of 1.06 in Jinheung, but in Tongil it did not decline from 1.20 down to 0.96. Accordingly, it seemed inappropriate to set the specific gravity criterion for ripened grains at 1.06 for the Tongil variety. 3. The increase in grain weight after flowering differed among varieties. Generally, rice varieties originating from cold areas showed a slow increase in grain weight, whereas Tongil's increase was rapid except at lower temperatures in the late ripening stage. 4. In late-tillered or weak culms, the number of spikelets was small and the percentage of ripened grains was low. Tongil produced more late-tillered culms and had a longer flowering duration, especially at lower temperatures, resulting in a lower percentage of ripened grains. 5. The leaf blade of Tongil was short, broad and erect, giving it a better light-receiving status for photosynthesis. The photosynthetic activity of Tongil per unit leaf area was higher than that of Jinheung at higher temperatures, but lower at lower temperatures. 6. Tongil was highly resistant to lodging because of its short culm length and thick lower internodes. Before flowering, Tongil had relatively higher amounts of sugars, phosphate, silicate, calcium, manganese and magnesium. 7. The number of spikelets of Tongil was much greater than that of Jinheung. A negative correlation was observed between the number of spikelets and the percentage of ripened grains in Jinheung, but no correlation was found in Tongil grown at higher temperatures; therefore, grain yield increased with the number of spikelets in Tongil. Anthesis did not occur below 21$^{\circ}C$ in Tongil, so sterile spikelets increased at lower temperatures during the flowering stage. 8. The root distribution of Jinheung was deeper than that of Tongil. The root activity of Tongil, evaluated by the $\alpha$-naphthylamine oxidation method, was higher than that of Jinheung at higher temperatures, but lower at lower temperatures; this seemed to be related to the discoloration of leaf blades. 9. Tongil had a better light-receiving status for photosynthesis and a better productive structure, with a balance between photosynthesis and respiration, so Tongil appears to have a more ideal plant type for obtaining a higher grain yield than Jinheung. 10. Solar radiation from 10 days before to 30 days after flowering seemed sufficient for ripening in Suwon, but the air temperature dropped below 22$^{\circ}C$ after August 25; air temperature was therefore believed to be one of the ripening-limiting factors in this case. 11. The optimum temperature for ripening in Jinheung was relatively lower than that of Tongil, which required more than $25^{\circ}C$; air temperature below 21$^{\circ}C$ was one of the limiting factors for ripening in Tongil. 12. Jinheung appeared to have relatively high photosensitivity and moderate thermosensitivity, while Tongil had low photosensitivity, high thermosensitivity and a longer basic vegetative phase. 13. Under higher nitrogen application at the late growing stage, the grain yield of Jinheung increased along with the percentage of ripened grains, while the grain yield of Tongil decreased owing to a reduced number of spikelets, even though photosynthetic activity after flowering increased. 14. The grain yield of Jinheung decreased only slightly under late transplanting culture, since its photosynthetic activity was relatively high at lower temperatures, but that of Tongil decreased because of its inactive photosynthesis at lower temperatures; the highest yield of Tongil was obtained under early transplanting culture. 15. Tongil was adapted to higher fertilizer levels and dense transplanting, and the percentage of ripened grains was improved by shortening the flowering duration through an increased number of seedlings per hill. 16. The percentage of vigorous tillers increased with denser transplanting and more seedlings per hill. 17. Phosphate application was shown to improve the percentage of ripened grains at lower temperatures. The above results are summarized again below. The Japonica-type leading varieties should flower before August 20 to ensure satisfactory ripening of grains. Nitrogen applied as basal dressing should not exceed 7.5 kg/10a, and the remaining nitrogen should be applied at the later growing stage to increase photosynthetic activity. The morphological and physiological characteristics of Tongil, a semi-dwarf Indica $\times$ Japonica hybrid variety, are very different from those of the other leading rice varieties, requiring changes in seed selection by the specific gravity method, in milling, and in cultural practices. Considering the peculiar distribution of grains selected by this method and the brown/rough rice ratio, the specific gravity criterion for seed selection should be changed from the currently employed 1.06 to about 0.96 for Tongil. In the milling process, it would be advisable to bear in mind the specific traits of Tongil grain appearance. Tongil is a variety with many weak tillers, and flowering is delayed under low-temperature conditions; such characteristics result in inactivation of roots and leaf blades, which substantially lowers the percentage of ripened grains through increased unfertilized spikelets. In addition, Tongil adapts well to higher nitrogen application. It is therefore recommended to transplant the Tongil variety earlier in the season under conditions of higher nitrogen, phosphate and silicate, with dense planting and three vigorous seedlings per hill. In order to fully realize the capability of Tongil, several aspects such as varietal improvement, cultural practices and the milling process should be considered more intensively in the future.


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to defeat a human at Go since, unlike in chess, the number of possible games is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, drew attention. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, image and natural language, where existing machine learning techniques struggled to achieve good results. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether existing deep learning techniques can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in this paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, showing how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific position and recognizes local features, but in business data the distance between fields usually does not matter because the fields are independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, to learn the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. From the experiments we obtained several findings. First, models using dropout make slightly more conservative predictions than those without and generally classify better. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well in a binary classification problem to which it has rarely been applied, in addition to the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
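The F1 score used for evaluation can be computed directly from confusion-matrix counts; a minimal sketch (the example labels below are invented, not the bank data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class.

    Unlike overall accuracy, F1 focuses on how well the model identifies
    the class of interest, which matters when that class is rare.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 8 customers, 3 actually opened an account; the model finds 2 of them.
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
f1 = f1_score(y_true, y_pred)
# tp=2, fp=1, fn=1, so precision = recall = 2/3 and F1 = 2/3.
```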

The Implementation of a HACCP System through u-HACCP Application and the Verification of Microbial Quality Improvement in a Small Size Restaurant (소규모 외식업체용 IP-USN을 활용한 HACCP 시스템 적용 및 유효성 검증)

  • Lim, Tae-Hyeon;Choi, Jung-Hwa;Kang, Young-Jae;Kwak, Tong-Kyung
    • Journal of the Korean Society of Food Science and Nutrition / v.42 no.3 / pp.464-477 / 2013
  • There is a great need to develop training programs proven to change behavior and improve knowledge. The purpose of this study was to evaluate employee hygiene knowledge, hygiene practice, and cleanliness before and after HACCP system implementation at one small restaurant, and to analyze the efficiency of the system using time-temperature control after implementation of u-HACCP$^{(R)}$. Employee hygiene knowledge and practices improved significantly (p<0.05) after HACCP system implementation. In non-heating processes, such as seasoned lettuce, controlling the sanitation of the cooking facility and the chlorination of raw ingredients were identified as the significant CCPs. Sanitizing was an important CCP because total bacteria were reduced by 2~4 log CFU/g after implementation of HACCP; in bean sprouts, microbial levels decreased from 4.20 log CFU/g to 3.26 log CFU/g. There were significant correlations between hygiene knowledge, practice, and microbiological contamination. First, personnel hygiene correlated significantly with 'total food hygiene knowledge' scores (p<0.05). Second, total food hygiene practice scores correlated significantly (p<0.05) with the improved microbiological quality of lettuce salad. Third, in the assessment of microbiological quality after 1 month, there were significant (p<0.05) improvements in heating times and in the washing and division processes; after 2 months, microbiological quality was maintained, although only two categories (division process and kitchen floor) improved further. This study also investigated time-temperature control using a ubiquitous sensor network (USN) consisting of a ubi reader (CCP thermometer), a ubi manager (tablet PC), and application software (HACCP monitoring system). Temperature control after USN introduction showed better thermal management (accuracy, efficiency, and consistency of time control) than before. Based on these results, strict time-temperature control could be an effective method to prevent foodborne illness.

Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

  • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.1 / pp.115-118 / 2012
  • Purpose: The Levey-Jennings chart controls measurement values that deviate from the tolerance value (mean ${\pm}2SD$ or ${\pm}3SD$). The upgraded Westgard Multi-Rules, on the other hand, are actively recommended as a more efficient, specialized form of internal quality control for hospital certification. To apply the Westgard Multi-Rules in quality control, a credible quality control substance and target value are required. However, as physical examinations commonly use the quality control substances provided within the test kit, calculating target values is difficult owing to frequent changes in concentration and the insufficient credibility of the control substance. This study attempts to improve the professionalism and credibility of quality control by applying the Westgard Multi-Rules and calculating credible target values using a commercialized quality control substance. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 of Company B as the quality control substance for Total T3, the thyroid test implemented at the hospital. The target value was established as the mean of 295 cases collected over 1 month, excluding values that deviated beyond ${\pm}2SD$, and was entered using the hospital's quality control program. The 12s, 22s, 13s, 2 of 32s, R4s, 41s, $10\bar{x}$, and 7T Westgard Multi-Rules were applied to the Total T3 test, which was conducted 194 times over 20 days in August. Based on the applied rules, the data were classified into random error and systematic error for analysis. Results: For quality control substances 1, 2, and 3, the target values of Total T3 were established as 84.2 ng/$dl$, 156.7 ng/$dl$, and 242.4 ng/$dl$, with standard deviations of 11.22 ng/$dl$, 14.52 ng/$dl$, and 14.52 ng/$dl$, respectively. According to the error type analysis after applying the Westgard Multi-Rules against these target values, the following results were obtained: for random error, 12s was detected 48 times, 13s 13 times, and R4s 6 times; for systematic error, 22s was detected 10 times, 41s 11 times, 2 of 32s 17 times, and $10\bar{x}$ 10 times, while 7T did not apply. For uncontrollable random errors, the entire experimental process was rechecked and greater emphasis was placed on re-testing. For controllable systematic errors, the cause of the error was sought, recorded in the action form, and reported to the internal quality control committee if necessary. Conclusions: This study applied the Westgard Multi-Rules using a commercialized control substance and established target values. As a result, precise analysis of random and systematic error was achieved through the 12s, 22s, 13s, 2 of 32s, R4s, 41s, $10\bar{x}$, and 7T rules, and ideal quality control was achieved by analyzing all data within the range of ${\pm}3SD$. Quality control based on the systematic application of the Westgard Multi-Rules is therefore more effective than the Levey-Jennings chart and can maximize error detection.
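A simplified, single-control-level sketch of how several of the cited Westgard rules can be checked programmatically (the run values are invented; the 2 of 32s and 7T rules and cross-level checks are omitted, and the target mean/SD for control level 1 are taken from the abstract):

```python
def westgard_violations(values, mean, sd):
    """Flag violations of a subset of Westgard rules for one control level.

    values: consecutive QC measurements; mean/sd: established target values.
    Returns the set of violated rule names.
    """
    z = [(v - mean) / sd for v in values]
    rules = set()
    if any(abs(x) > 2 for x in z):
        rules.add("12s")                       # warning rule
    if any(abs(x) > 3 for x in z):
        rules.add("13s")                       # random error
    for a, b in zip(z, z[1:]):
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            rules.add("22s")                   # systematic error
        if abs(a - b) > 4:
            rules.add("R4s")                   # random error (range)
    for i in range(len(z) - 3):
        win = z[i:i + 4]
        if all(x > 1 for x in win) or all(x < -1 for x in win):
            rules.add("41s")                   # systematic shift
    for i in range(len(z) - 9):
        win = z[i:i + 10]
        if all(x > 0 for x in win) or all(x < 0 for x in win):
            rules.add("10x")                   # systematic bias
    return rules

# Target mean/SD for Total T3 control level 1, as reported in the study.
mean, sd = 84.2, 11.22
run = [86.0, 90.0, 88.0, 120.0, 85.0]          # 120 ng/dl exceeds mean + 3SD
```

Running `westgard_violations(run, mean, sd)` flags the 12s warning and the 13s random-error rule for the 120 ng/dl value.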


A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

  • Chae, Sung-Ho;Park, Sung-Hwan;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.33 no.6_1 / pp.931-946 / 2017
  • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate their applicability to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken from May to June of 2015, 2016, and 2017, covering the rural areas of Jeollabuk-do, where agricultural areas make up 46% of the land. The soil moisture conditions at each date in the study area were obtained through the SPI3 (Standardized Precipitation Index) drought index; the three images correspond to near normal, moderately wet, and moderately dry conditions. The Temperature Vegetation Dryness Index (TVDI) was calculated to observe the soil moisture status from the Landsat-8 OLI/TIRS images under these different conditions, and the results were compared against the soil moisture conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the satellite images: the maximum/minimum values of LST as a function of NDVI are extracted from the distribution of pixels in the LST-NDVI feature space, the dry/wet edges of LST are determined by linear regression, and the TVDI value is obtained as the ratio of the LST value between the two edges. We classified the relative soil moisture conditions from the TVDI values into five stages (very wet, wet, normal, dry, and very dry) and compared them to the soil moisture conditions obtained from SPI3. Because of the rice-planting season from May to June, 62% of each image was classified as wet or very wet, owing to the paddy fields that occupy the largest proportion of the image; pixels classified as normal were attributed to the influence of the upland field areas. The TVDI classification results for the whole image roughly corresponded to the SPI3 soil moisture condition, but did not match the subdivided classes of very dry, wet, and very wet. In addition, after extracting and classifying the agricultural areas into paddy and upland fields, the paddy field areas did not correspond to the SPI3 drought index in the very dry, normal, and very wet classes, and the upland field areas did not correspond in the normal class. This is considered to be a problem in dry/wet edge estimation caused by outliers such as extremely dry bare soil, very wet paddy fields, water, clouds, and mountain topography effects (shadow). Nevertheless, in the agricultural areas, especially the upland fields, it was possible to observe subdivided soil moisture conditions effectively from May to June. This method is expected to be applicable to observing the temporal and spatial changes of soil moisture status in agricultural areas using high-spatial-resolution optical satellites, and to forecasting agricultural production.
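A minimal sketch of the TVDI calculation described above, assuming the dry/wet edges have already been fitted as linear functions of NDVI (the edge coefficients and pixel values here are invented for illustration):

```python
def tvdi(lst, ndvi, dry_edge, wet_edge):
    """Temperature-Vegetation Dryness Index for one pixel.

    dry_edge and wet_edge are (slope, intercept) pairs from linear
    regressions of the max/min LST against NDVI in the LST-NDVI
    feature space. TVDI = (LST - LSTmin) / (LSTmax - LSTmin):
    0 means the pixel lies on the wet edge, 1 on the dry edge.
    """
    lst_max = dry_edge[0] * ndvi + dry_edge[1]
    lst_min = wet_edge[0] * ndvi + wet_edge[1]
    return (lst - lst_min) / (lst_max - lst_min)

# Illustrative edges (slope, intercept in kelvin): the dry edge falls
# with NDVI while the wet edge is roughly flat.
dry = (-15.0, 320.0)
wet = (0.0, 295.0)
t = tvdi(lst=305.0, ndvi=0.4, dry_edge=dry, wet_edge=wet)
# lst_max = 314 K, lst_min = 295 K, so TVDI = 10/19, i.e. about 0.53.
```

The five-stage classification in the study would then just bin this value (e.g. very wet through very dry); the abstract does not give the exact thresholds.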

Blood Pressure Reactivity during Nasal Continuous Positive Airway Pressure in Obstructive Sleep Apnea Syndrome (폐쇄성(閉鎖性) 수면무호흡증(睡眠無呼吸症)에서 지속적(持續的) 상기도(上氣道) 양압술(陽壓術)이 혈력학적(血力學的) 변화(變化)에 끼치는 영향(影響))

  • Park, Doo-Heum;Jeong, Do-Un
    • Sleep Medicine and Psychophysiology / v.9 no.1 / pp.24-33 / 2002
  • Objectives: Nasal continuous positive airway pressure (CPAP) corrected elevated blood pressure (BP) in some studies of obstructive sleep apnea syndrome (OSAS) but not in others. Such inconsistent results in previous studies might be due to differences in factors influencing the effects of CPAP on BP. The factors referred to include BP monitoring techniques, the characteristics of subjects, and method of CPAP application. Therefore, we evaluated the effects of one night CPAP application on BP and heart rate (HR) reactivity using non-invasive beat-to-beat BP measurement in normotensive and hypertensive subjects with OSAS. Methods: Finger arterial BP and oxygen saturation monitoring with nocturnal polysomnography were performed on 10 OSAS patients (mean age $52.2{\pm}12.4\;years$; 9 males, 1 female; respiratory disturbance index (RDI)>5) for one baseline night and another CPAP night. Beat-to-beat measurement of BP and HR was done with finger arterial BP monitor ($Finapres^{(R)}$) and mean arterial oxygen saturation ($SaO_2$) was also measured at 2-second intervals for both nights. We compared the mean values of cardiovascular and respiratory variables between baseline and CPAP nights using Wilcoxon signed ranks test. Delta ($\Delta$) BP, defined as the subtracted value of CPAP night BP from baseline night BP, was correlated with age, body mass index (BMI), baseline night values of BP, BP variability, HR, HR variability, mean $SaO_2$ and respiratory disturbance index (RDI), and CPAP night values of TWT% (total wake time%) and CPAP pressure, using Spearman's correlation. Results: 1) Although increase of mean $SaO_2$ (p<.01) and decrease of RDI (p<.01) were observed on the CPAP night, there were no significant differences in other variables between two nights. 2) However, delta BP tended to increase or decease depending on BP values of the baseline night and age. 
Delta systolic BP and baseline systolic BP showed a significant positive correlation (p<.01), but delta diastolic BP and baseline diastolic BP did not, except for a positive correlation in the wake stage (p<.01). Delta diastolic BP and age showed a significant negative correlation (p<.05) during all stages except REM, but delta systolic BP and age did not. 3) Delta systolic and diastolic BPs did not significantly correlate with the other factors, such as BMI, baseline-night values of BP variability, HR, HR variability, mean SaO2 and RDI, and CPAP-night values of TWT% and CPAP pressure, except for a positive correlation between delta diastolic pressure and TWT% of the CPAP night (p<.01). Conclusions: We observed that systolic and diastolic BP tended to decrease, increase, or remain unchanged depending on the baseline-night systolic BP level and age. We suggest that BP reactivity to CPAP be treated as a complex phenomenon rather than a simple, undifferentiated BP decrease.
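The paired-nights comparison and the delta-BP correlation described in the abstract can be sketched with SciPy. This is a minimal illustration of the two statistical steps only; the BP arrays below are made-up placeholder values, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative per-subject mean systolic BP (mmHg) for 10 subjects;
# placeholder values, not the study's measurements.
baseline_sbp = np.array([148.0, 132.5, 121.0, 155.2, 140.3,
                         118.7, 160.1, 128.9, 137.4, 144.6])
cpap_sbp = np.array([141.2, 134.0, 123.5, 147.8, 138.9,
                     120.1, 151.0, 130.2, 136.8, 140.0])

# Paired, non-parametric comparison of the two nights (small n).
w_stat, w_p = stats.wilcoxon(baseline_sbp, cpap_sbp)

# Delta BP = CPAP-night BP subtracted from baseline-night BP,
# as defined in the abstract.
delta_sbp = baseline_sbp - cpap_sbp

# Spearman rank correlation of delta BP with the baseline-night BP level.
rho, rho_p = stats.spearmanr(delta_sbp, baseline_sbp)
print(f"Wilcoxon p={w_p:.3f}; Spearman rho={rho:.2f} (p={rho_p:.3f})")
```

A positive rho here would mirror the reported pattern: subjects with higher baseline systolic BP show larger BP reductions on the CPAP night.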

  • PDF

Visual Media Education in Visual Arts Education (미술교육에 있어서 시각적 미디어를 통한 조형교육에 관한 연구)

  • Park Ji-Sook
    • Journal of Science of Art and Design
    • /
    • v.7
    • /
    • pp.64-104
    • /
    • 2005
  • Visual media transmit images and information reproduced in large quantities, such as photography, film, television, video, advertisement, and computer images. To correspond to students' future reception and recognition of culture, visual arts education must make room for the study of visual culture. 'Visual culture' refers to the cultural phenomena of visual images conveyed via visual media, which include not only the categories of traditional arts such as painting, sculpture, print, and design, but also the performance arts, including fashion shows and carnival parades, and the mass and electronic media such as photography, film, television, video, advertisement, cartoon, animation, and computer images. In the world of visual media, the 'image' functions as an essential medium of communication. Therefore, today's culture is called an 'era of image culture', having shifted from an alphabet-centered era to an image-centered one. The image, via visual media, has become a dominant means of communication in large parts of human life, so we can designate the 'image' as a typical aspect of today's visual culture. The image, as an essential medium of communication, plays an important role in contemporary society. There are two ways of producing digital images. One is the conversion of an analogue image, such as an actual picture, photograph, or film, into a digital one through digitization by a digital camera or scanner acting as an analogue/digital converter. The other is processing with computer drawing or the modeling of objects, which is suited to the production of pictorial and surreal images. Digital images produced in the latter way can be divided into the 'pixel' form and the 'vector' form. A vector is a line linking a point of departure to an end point, which organizes information; the computer stores each line's standard location and its locations relative to the others. The digital image shows far more 'perfectness' than any other visual medium. 
The digital image has been evolving in diverse directions, such as the production of geometrical or organic image compositing, interactive art, multimedia art, and web art, in which the computer is applied as an extended tool of painting. Some interpret the digital copy, with its endless reproduction of the original, even as an extension of the print. Visual art is no longer a simple activity of representation by a painter or sculptor; it is now intimately associated with questions of how media are applied. There are problems with images conveyed via visual media. First, the image via media does not reflect reality as it is, but an artificially manipulated world, that is, a virtual reality. Second, the introduction of digital effects and the development of image-processing technology have enhanced the spectacle of destructive and violent scenes. Third, children tend to take the interactive images of computer games and virtual reality for reality, or truth. Education needs not only to point out the ill effects of mass media and prevent the younger generation from being damaged by them, but also to offer the knowledge and know-how to cope actively with social and cultural circumstances. Visual media education is one of the essential methods for contemporary and future human beings amid the overflow of image information. The fostering of 'visual literacy' can be considered the very purpose of visual media education. It is a way to lead individuals to be discerning, active consumers and producers of visual media in life as far as possible. The elements of 'visual literacy' can be divided into a faculty of recognition related to visual media, a faculty of critical reception, a faculty of appropriate application, a faculty of active work, and a faculty of creative modeling, all of which are promoted at the same time by the education of 'visual literacy'. 
In conclusion, the education of 'visual literacy' guides students to comprehend and discriminate visual image media carefully, receive them critically, apply them properly, and produce them creatively and voluntarily. Moreover, it leads to artistic activity by means of new media. This education can be approached and enhanced through connection and integration with real life. The visual arts and their education play an important role in a digital era dependent on visual communication via image information. Visual media today function as an essential element both in daily life and in the arts. Students can soundly understand the visual phenomena of today by means of visual media and apply them as an expression tool of life culture as well. A new recognition and valuation of visual image and media education is required to cultivate the capability to deal actively and uprightly with the changes in the history of civilization. 1) Visual media education helps to cultivate a sensibility for images that reacts to and deals with the circumstances. 2) It helps students to comprehend contemporary arts and culture via new media. 3) It supplies students with a chance to experience visual modeling by means of new media. 4) It provides educational opportunities with images possessing temporality and spatiality, and thereby increases the number of discerning people. 5) Modeling activity via new media keeps students continuously interested in the study and production of plastic arts. 6) It raises the ability of visual communication needed to deal with the image-information society. 7) The education of the digital image is also significant for cultivating people of talent for the future image-information society. To correspond to changing and developing social and cultural circumstances, and to the forms of students' reception and recognition of them, visual arts education must open up the field of study of a new visual culture. 
In addition, a program at a more systematic and active level needs to be developed for visual media education. Educational contents should be extended to the media of visual images, that is, photography, film, television, video, computer graphics, animation, music video, computer games, and multimedia. Each medium must be approached separately, because each maintains its own modes and peculiarities according to the form in which its message is conveyed. Concrete and systematic teaching methods and the quality of education must be researched and developed, centering on the development of a course of study. Teachers' foundational capability for visual media education should also be cultivated. Here, attention must be paid to the fact that the technological level of the media is secondary: school education does not intend to train expert, skillful producers, but to lay stress on essential aesthetic education with visual media in the social and cultural context, with respect to consumers, including people of culture.

  • PDF

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural network (ANN), and multiclass support vector machine (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine)', is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. 
In particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, along with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). 80 percent of the data for each class was used for training, and the remaining 20 percent for validation. To overcome the small sample size, we applied five-fold cross-validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software package, and Evolver 5.5, a commercial software package that enables GA. The other comparative models were experimented with using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). The experimental results showed that the proposed model, GAMSVM, outperformed all the competing models. 
In addition, the model was found to use fewer independent variables yet show higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activity), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
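The GA-wrapper idea the abstract describes, a chromosome encoding both RBF kernel parameters and a feature-subset mask, scored by cross-validated accuracy, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' GAMSVM implementation: it uses scikit-learn's SVC (which applies one-against-one decomposition for multiclass by default) on synthetic stand-in data shaped like the paper's 14-ratio, 4-class setup, and a deliberately tiny GA.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the 14-ratio, 4-class credit dataset (not the real data).
X, y = make_classification(n_samples=400, n_features=14, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
N_FEAT = X.shape[1]

def decode(chrom):
    # Genes 0-1 encode log2(C) and log2(gamma); the rest is a feature mask.
    C = 2.0 ** chrom[0]
    gamma = 2.0 ** chrom[1]
    mask = chrom[2:] > 0.5
    if not mask.any():
        mask[0] = True  # keep at least one feature
    return C, gamma, mask

def fitness(chrom):
    # Five-fold cross-validated accuracy on the selected feature subset.
    C, gamma, mask = decode(chrom)
    clf = SVC(C=C, gamma=gamma, kernel="rbf")  # one-against-one multiclass
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def evolve(pop_size=12, gens=4):
    # Chromosome: [log2 C, log2 gamma, 14 feature genes in [0, 1)].
    pop = np.hstack([rng.uniform(-5, 15, (pop_size, 1)),
                     rng.uniform(-15, 3, (pop_size, 1)),
                     rng.random((pop_size, N_FEAT))])
    for _ in range(gens):
        scores = np.array([fitness(c) for c in pop])
        pop = pop[np.argsort(scores)[::-1]]        # selection: rank by fitness
        elite = pop[: pop_size // 2]
        parents = elite[rng.integers(0, len(elite), (pop_size - len(elite), 2))]
        cross = rng.random((pop_size - len(elite), pop.shape[1])) < 0.5
        children = np.where(cross, parents[:, 0], parents[:, 1])  # uniform crossover
        # Gaussian mutation keeps the search from stalling in local optima.
        children = children + rng.normal(0, 0.5, children.shape) \
            * (rng.random(children.shape) < 0.15)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(c) for c in pop])
    return pop[np.argmax(scores)], scores.max()

best, acc = evolve()
C, gamma, feat = decode(best)
print(f"CV accuracy={acc:.3f} with {int(feat.sum())} of {N_FEAT} features")
```

The joint encoding is the key design choice: because kernel parameters and the feature subset are evaluated together, the GA can trade features for kernel width instead of tuning them in separate passes.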