• Title/Summary/Keyword: Layer Performance

A Study on the Water Quality Improvement in Semi-closed Sea Area Using Solar Powered Circulators (태양광 물 순환장치 가동에 의한 반폐쇄성 수역의 수질 변화)

  • Kim, Deok-Gil;Lee, Eun-Kyeong;Kim, Mu-Chan;Song, Sung-Kyu;Cho, Kwang-Soo
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.15 no.3
    • /
    • pp.257-262
    • /
    • 2012
  • This study was conducted to verify the performance of a solar-powered water circulation apparatus installed in a semi-closed sea area of Tongyeong to improve water quality by removing the thermocline and the oxygen-depleted water mass, and to prevent red tides caused by eutrophication. Over 8 weeks of experiments, we found that the thermocline in the semi-closed sea area was removed gradually after installation of the apparatus. The initial surface and bottom temperatures were 27.9°C and 23.8°C, respectively, and both converged to 22.1°C. For DO concentration, there was initially a large gap between the surface (5.49 mg/L) and the bottom (2.61 mg/L), with an oxygen-depleted water mass in the bottom area; however, the DO concentration in the bottom layer increased gradually during operation (to 6.19 mg/L) and the oxygen-depleted water mass was removed. Owing to the combined effects of seasonal variation and 8 weeks of operation of the apparatus, the COD concentration decreased from 5.61 mg/L to 2.36 mg/L at the surface and from 6.08 mg/L to 1.73 mg/L at the bottom. The dissolved inorganic nitrogen concentration also decreased, from 0.135 mg/L to 0.050 mg/L at the surface and from 0.076 mg/L to 0.051 mg/L at the bottom. Because this research was conducted from July to September, the variation in water quality may have been affected by both seasonal variation and the operation of the water circulation apparatus. Hence, further research is required to verify the performance of the water circulation apparatus itself and to monitor dissolved inorganic nitrogen and phosphorus concentrations as well as Chl-a.

Fabrication and Performance of Anode-Supported Flat Tubular Solid Oxide Fuel Cell Unit Bundle (연료극 지지체식 평관형 고체산화물 연료전지 단위 번들의 제조 및 성능)

  • Lim, Tak-Hyoung;Kim, Gwan-Yeong;Park, Jae-Layng;Lee, Seung-Bok;Shin, Dong-Ryul;Song, Rak-Hyun
    • Journal of the Korean Electrochemical Society
    • /
    • v.10 no.4
    • /
    • pp.283-287
    • /
    • 2007
  • KIER has been developing an anode-supported flat-tubular solid oxide fuel cell unit bundle for intermediate-temperature (700~800°C) operation. The anode-supported flat-tubular cells have a Ni/YSZ cermet anode support, a thin electrolyte of 8 mol.% Y2O3-stabilized ZrO2 (YSZ), and a cathode multi-layer composed of Sr-doped LaMnO3 (LSM), an LSM-YSZ composite, and LaSrCoFeO3 (LSCF). The prepared anode-supported flat-tubular cell was joined to a ferritic stainless steel cap by an induction brazing process. Current collection for the cathode was achieved by winding Ag wire with La0.6Sr0.4CoO3 (LSCo) paste, while current collection for the anode used Ni wire and felt. For the stack, the prepared cells, each with an effective electrode area of 90 cm², were connected in series as 12 unit bundles, where each unit bundle consists of two cells connected in parallel. The performance of a unit bundle in 3% humidified H2 and air at 800°C showed a maximum power density of 0.39 W/cm² (@ 0.7 V). Through these experiments, we obtained the basic technology of the anode-supported flat-tubular cell and established a proprietary concept for the anode-supported flat-tubular cell unit bundle.
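
As a rough consistency check on the figures above, the quoted numbers imply the per-cell and per-stack output below. This is a back-of-envelope sketch under an assumption of mine: that the maximum power density (0.39 W/cm² at 0.7 V) applies uniformly over the full 90 cm² electrode area, which idealizes real operation.

```python
# Back-of-envelope output estimate from the figures quoted in the abstract.
# Assumption (mine, not the paper's): the maximum power density holds over
# the whole effective electrode area, which overstates real stack output.
power_density = 0.39    # W/cm^2 at 0.7 V (reported maximum)
area = 90.0             # cm^2 effective electrode area per cell
cells_per_bundle = 2    # two cells in parallel per unit bundle
bundles = 12            # unit bundles connected in series

p_cell = power_density * area           # ~35.1 W per cell
p_bundle = p_cell * cells_per_bundle    # ~70.2 W per bundle
p_stack = p_bundle * bundles            # ~842 W for the 12-bundle stack
print(f"cell {p_cell:.1f} W, bundle {p_bundle:.1f} W, stack {p_stack:.1f} W")
```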

Mapping Categories of Heterogeneous Sources Using Text Analytics (텍스트 분석을 통한 이종 매체 카테고리 다중 매핑 방법론)

  • Kim, Dasom;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.193-215
    • /
    • 2016
  • In recent years, the proliferation of diverse social networking services has led users to use many mediums simultaneously, depending on their individual purposes and tastes. Moreover, while collecting information about particular themes, they usually employ various mediums such as social networking services, Internet news, and blogs. In terms of management, however, each document circulated through these diverse mediums is placed in a different category on the basis of each source's policy and standards, hindering any attempt to conduct research on a specific category across different kinds of sources. For example, documents about "application for foreign travel" can be classified into "Information Technology," "Travel," or "Life and Culture" according to the particular standard of each source. Likewise, with different definitions and levels of specification at each source, similar categories can be named and structured differently from source to source. To overcome these limitations, this study proposes a plan for mapping categories between different sources across various mediums while keeping the existing category system of each medium as it is. Specifically, by re-classifying individual documents from the viewpoints of the various sources and storing the results of this classification as extra attributes, the study proposes a logical layer through which users can search for a specific document across multiple heterogeneous sources with different category names as if the documents belonged to the same source. In addition, 6,000 news articles were collected from two Internet news portals, and experiments were conducted to compare accuracy across sources, between supervised and semi-supervised learning, and between homogeneous and heterogeneous learning data. It is particularly interesting that in some categories the classification accuracy of semi-supervised learning using heterogeneous learning data proved higher than that of supervised and semi-supervised learning using homogeneous learning data. This study is significant in the following respects. First, it proposes a logical scheme for integrating and managing heterogeneous mediums with different classification systems while maintaining each existing physical classification system as it is. The results exhibit very different classification accuracies depending on the heterogeneity of the learning data, which is expected to spur further studies that enhance the performance of the proposed methodology through category-by-category analysis of these characteristics. In addition, with increasing demand for the search, collection, and analysis of documents from diverse mediums, Internet search is no longer restricted to one medium; yet because each medium has a different category structure and naming, it is very difficult to search a specific category across heterogeneous mediums. The proposed methodology is also significant in that, once users select a desired site, it lets them query all documents according to that site's classification standard while leaving each site's characteristics and structure intact. The proposed methodology needs to be complemented in the following respects. First, since only an indirect comparison and evaluation of its performance was made, future studies need to test its accuracy more directly; that is, after re-classifying documents of the target source on the basis of the category system of the existing source, the accuracy of that classification should be verified through evaluation by actual users. The classification accuracy also needs to be increased by making the methodology more sophisticated. Furthermore, the categories in which heterogeneous semi-supervised learning showed higher accuracy than supervised learning deserve closer study, as their characteristics may help in obtaining heterogeneous documents from diverse mediums and in devising ways to enhance the accuracy of document classification.
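
The core mechanism, re-classifying each document under another source's category system and storing the predicted label as an extra attribute, can be sketched as follows. This is a minimal illustration under assumptions of mine, not the authors' implementation: TF-IDF features with logistic regression, plus a simple confidence-threshold self-training loop for the semi-supervised case. All identifiers (map_to_source_a_categories and the rest) are hypothetical.

```python
# Minimal sketch of cross-source category mapping via self-training.
# Assumptions (mine, not the paper's): TF-IDF features, logistic regression,
# and a confidence-threshold pseudo-labeling loop; identifiers are hypothetical.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def map_to_source_a_categories(a_texts, a_labels, b_texts,
                               threshold=0.9, rounds=5):
    """Learn source A's category system, then attach A-style labels to
    source B's documents as an extra attribute."""
    vec = TfidfVectorizer(max_features=20000)
    X_a = vec.fit_transform(a_texts)
    X_b = vec.transform(b_texts)
    y_a = np.asarray(a_labels)

    clf = LogisticRegression(max_iter=1000)
    X_train, y_train = X_a, y_a
    for _ in range(rounds):                      # semi-supervised rounds
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(X_b)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Fold confidently pseudo-labeled B documents back into training
        # (recomputed from scratch each round, for simplicity).
        X_train = vstack([X_a, X_b[confident]])
        y_train = np.concatenate([y_a, clf.predict(X_b[confident])])
    return clf.predict(X_b)                      # A-style category per B doc
```

In use, the returned labels would be stored alongside each B document so that a search for, say, "Travel" in A's taxonomy also retrieves the matching B documents, without touching B's own physical category system.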

Development of $14"{\times}8.5"$ active matrix flat-panel digital x-ray detector system and Imaging performance (평판 디지털 X-ray 검출기의 개발과 성능 평가에 관한 연구)

  • Park, Ji-Koon;Choi, Jang-Yong;Kang, Sang-Sik;Lee, Dong-Gil;Seok, Dae-Woo;Nam, Sang Hee
    • Journal of radiological science and technology
    • /
    • v.26 no.4
    • /
    • pp.39-46
    • /
    • 2003
  • Digital radiographic systems based on solid-state detectors, commonly referred to as flat-panel detectors, are gaining popularity in clinical practice, and large-area flat-panel solid-state detectors are being investigated for digital radiography. The purpose of this work was to evaluate active matrix flat-panel digital x-ray detectors in terms of their modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). In this paper, the development and evaluation of a selenium-based flat-panel digital x-ray detector are described. The prototype detector has a pixel pitch of 139 μm and a total active imaging area of 14×8.5 inch², giving a total of 3.9 million pixels. The detector includes an x-ray imaging layer of amorphous selenium as the photoconductor, vacuum-evaporated onto a TFT flat panel, which generates signal in proportion to the incident x-rays; the film thickness was about 500 μm. To evaluate the imaging performance of the digital radiography (DR) system developed in our group, the sensitivity, linearity, MTF, NPS, and DQE of the detector were measured. The measured sensitivity was 4.16×10⁶ ehp/pixel·mR at a bias field of 10 V/μm with a beam condition of 41.9 keV. The measured MTF at 2.5 lp/mm was 52%, and the DQE at 1.5 lp/mm was 75%. Excellent linearity was also shown, with a coefficient of determination (r²) of 0.9693.
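
For reference, the MTF, NPS, and DQE reported above are tied together by the standard frequency-dependent definition of detective quantum efficiency; the relation below is the textbook form, not a formula quoted from this paper (S denotes the mean large-area signal and q̄ the incident photon fluence).

```latex
% Standard (textbook) definition of DQE in terms of the measured quantities:
% f = spatial frequency, S = mean large-area signal, \bar{q} = incident
% x-ray photon fluence per unit area.
\[
  \mathrm{DQE}(f)
    = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
    = \frac{S^{2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NPS}(f)}
\]
```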

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular owing to its capability of reaching a nearly global optimum. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications demanding high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is an iterative process comprising selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, which aims to select better holograms, plays an important role in the implementation of the GA; however, this evaluation wastes much time Fourier-transforming the parameters encoded on the hologram into the value to be solved, and depending on the speed of the computer it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, correspondingly reducing the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed, in which the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on the computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
The reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. We see that the simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and the neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by grant No. M01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
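
The hybrid scheme, seeding the GA's initial population with network-produced holograms instead of purely random ones, can be sketched as below. This is a minimal illustration under assumptions of mine, not the authors' code: fitness is the diffraction efficiency of a binary (0/π) phase hologram computed by FFT, the GA uses tournament selection with the crossover and mutation probabilities quoted above, and ann_initial_holograms is a hypothetical stand-in for the trained network's output.

```python
# Sketch of a GA for binary phase holograms whose initial population is
# partly seeded by ANN-produced approximations (hypothetical stand-in here).
import numpy as np

N = 64                                    # hologram is N x N, phase 0 or pi
rng = np.random.default_rng(0)
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0                # toy target diffraction pattern
target /= target.sum()

def fitness(h):
    """Diffraction efficiency: fraction of reconstructed power on target."""
    field = np.exp(1j * np.pi * h)        # binary phase 0 / pi
    recon = np.abs(np.fft.fft2(field)) ** 2
    recon /= recon.sum()
    return (recon * (target > 0)).sum()

def ann_initial_holograms(k):
    """Hypothetical stand-in for the trained network's approximate holograms;
    here just thresholded noise."""
    return [(rng.random((N, N)) > 0.5).astype(np.uint8) for _ in range(k)]

def ga(pop, iters=200, p_cross=0.75, p_mut=0.001, tour=3):
    for _ in range(iters):
        scores = np.array([fitness(h) for h in pop])
        new = []
        while len(new) < len(pop):
            # tournament selection of two parents
            a, b = (pop[max(rng.choice(len(pop), tour),
                            key=lambda i: scores[i])] for _ in range(2))
            child = a.copy()
            if rng.random() < p_cross:            # one-point block crossover
                r = rng.integers(1, N)
                child[r:] = b[r:]
            mask = rng.random((N, N)) < p_mut     # bit-flip mutation
            child[mask] ^= 1
            new.append(child)
        pop = new
    return max(pop, key=fitness)

# Hybrid step: seed half the population with ANN outputs, fill rest randomly.
pop = ann_initial_holograms(15) + [(rng.random((N, N)) > 0.5).astype(np.uint8)
                                   for _ in range(15)]
best = ga(pop)
print(f"best diffraction efficiency: {fitness(best):.3f}")
```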

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that judged whether an abnormality existed based on statistics derived from the data. This methodology worked because data used to be simple in dimension, so classical statistical methods were effective. However, as the characteristics of data have grown complex in the era of big data, it has become more difficult to analyze and predict the data generated throughout industry accurately in the conventional way. Supervised learning algorithms based on SVMs and decision trees were therefore adopted. However, a supervised model predicts test data accurately only when the class distribution of the training data is balanced, whereas most data generated in industry have unbalanced classes; the predicted results are therefore not always valid when a supervised model is applied. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. By contrast, anomaly detection on sequence data using generative adversarial networks has been studied far less than on image data. Li et al. (2018) proposed a model based on the LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it has not been applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that many studies remain to be tried on the anomaly classification of sequence data with a generative adversarial network. To learn the sequence data, the generative adversarial network is composed of LSTMs: the generator's 2-stacked LSTM consists of a 32-dim and a 64-dim hidden unit layer, and the discriminator's LSTM consists of a 64-dim hidden unit layer. Whereas the existing paper on anomaly detection for sequence data derived abnormal scores from the entropy of the probability of the actual data, in this paper abnormal scores are derived using the feature matching technique, as mentioned earlier. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because it can learn the data distribution from real categorical sequence data, it is not swayed by a single normal pattern, whereas the autoencoder is. The robustness test showed that the accuracy of the autoencoder was 92% and that of the adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%. Experiments were also conducted to show how much performance changes with differences in the optimization structure of the latent variables; as a result, sensitivity improved by about 1%. These results suggest a new perspective on optimizing latent variables, which had previously received relatively little attention.
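
A feature-matching anomaly score of the kind described, comparing a discriminator's intermediate features for a real sequence against those of the closest generated sequence, can be sketched roughly as follows. This is a simplified illustration under my own assumptions, not the paper's architecture: the latent code is refined by plain gradient descent rather than the paper's LSTM-based optimizer, and only the layer sizes follow the abstract (generator: 2-stacked LSTM with 32- and 64-dim hidden layers; discriminator: 64-dim LSTM). The feature dimension FEAT and latent size Z are placeholders.

```python
# Sketch of a feature-matching anomaly score with LSTM G and D (simplified:
# plain gradient descent on z instead of the paper's LSTM-based optimizer).
import torch
import torch.nn as nn

FEAT = 8      # per-step feature size of a sequence (placeholder)
Z = 16        # latent dimension per step (placeholder)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.LSTM(Z, 32, batch_first=True)     # 32-dim hidden layer
        self.l2 = nn.LSTM(32, 64, batch_first=True)    # 64-dim hidden layer
        self.out = nn.Linear(64, FEAT)
    def forward(self, z):
        h, _ = self.l1(z)
        h, _ = self.l2(h)
        return torch.tanh(self.out(h))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEAT, 64, batch_first=True)  # 64-dim hidden layer
        self.head = nn.Linear(64, 1)
    def features(self, x):
        h, _ = self.lstm(x)
        return h[:, -1]                     # last-step features f(x)
    def forward(self, x):
        return self.head(self.features(x))

def anomaly_score(x, G, D, steps=100, lr=0.05):
    """Find z whose generated sequence matches x in D's feature space;
    the residual feature distance is the anomaly score."""
    z = torch.randn(x.size(0), x.size(1), Z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (D.features(x) - D.features(G(z))).pow(2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (D.features(x) - D.features(G(z))).pow(2).sum(dim=1)

# Usage on a trained G and D: higher scores flag anomalous action sequences.
G, D = Generator(), Discriminator()
x = torch.randn(4, 20, FEAT)                # 4 dummy sequences of length 20
print(anomaly_score(x, G, D, steps=10))
```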

Monthly HPLC Measurements of Pigments from an Intertidal Sediment of Geunso Bay Highlighting Variations of Biomass, Community Composition and Photo-physiology of Microphytobenthos (HPLC를 이용한 근소만 조간대 퇴적물내의 저서미세조류 현존량, 군집 및 광생리의 월 변화 분석)

  • KIM, EUN YOUNG;AN, SUNG MIN;CHOI, DONG HAN;LEE, HOWON;NOH, JAE HOON
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.24 no.1
    • /
    • pp.1-17
    • /
    • 2019
  • In this study, surveys were carried out from October 2016 to October 2017 along the tidal flat of Geunso Bay, on the Taean Peninsula at the western edge of Korea. Sampling trips were made a total of 16 times, once or twice a month. To investigate the monthly variation of the microphytobenthos (MPB), the biomass, community composition, and photo-physiology were analyzed by HPLC (high performance liquid chromatography). The total chlorophyll a (TChl a) concentration, used as an indicator of MPB biomass in the upper 1 cm sediment layer, ranged from 40.4 to 218.9 mg m⁻² throughout the sampling period. TChl a concentrations reached their maximum on the 24th of February and remained high throughout March, after which they started to decline; the MPB biomass showed high values in winter and low values in summer. The monthly variation of phaeophorbide a concentrations suggested that low grazing intensity by predators in winter may have partly contributed to the MPB winter bloom. Analysis of the monthly variation of the MPB community composition using the major marker pigments showed that the concentration of fucoxanthin, the marker pigment of benthic diatoms, was the highest throughout the year. The concentrations of most marker pigments, except chlorophyll b (chlorophytes) and peridinin (dinoflagellates), increased in winter; however, fucoxanthin increased the most, and the relative ratios of the major marker pigments to TChl a, except for fucoxanthin, decreased during the same period. The vertical distributions of Chl a and oxygen concentrations in the sediments, measured with a fluorometer and an oxygen micro-optode, showed that Chl a concentrations decreased along with oxygen concentrations as sediment depth increased, and this tendency became more apparent in winter. Chl a was vertically uniform down to 12 mm from May to July, but in May the oxygen concentration decreased sharply below 1 mm. The increase in phaeophorbide a concentration observed at this time is likely caused by the increased oxygen consumption of grazing zoobenthos, which suggests that MPB cells are transported downward by the bioturbation of zoobenthos. The relative ratio DT/(DD+DT), obtained from diadinoxanthin (DD) and diatoxanthin (DT) and often used as an indicator of the photo-adaptation of MPB, decreased from October to March and increased in May, indicating monthly differences in the activity of the xanthophyll cycle as well.

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.157-167
    • /
    • 2009
  • Recently, leaks of confidential and personal information have been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in carelessly developed web applications, and an attack on a web application cannot be detected by existing firewalls and intrusion detection systems; moreover, signature-based detection has a limited capability for detecting new threats. Therefore, much research on detecting attacks against web applications employs anomaly-based detection methods that analyze web traffic. Research on anomaly-based detection through normal web-traffic analysis focuses on three problems: how to analyze given web traffic accurately, the system performance needed to inspect the application payload of packets in order to detect attacks on the application layer, and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system, a suggested solution to these problems, aimed to resolve all security problems at once, but it is not widely used due to its low efficiency and high cost. Besides, the web filter that performs one of the UTM system's functions cannot adequately detect the variety of recent sophisticated attacks on web applications. To resolve these problems, studies are being carried out on the web application firewall as a new kind of network security system. As such studies focus on speeding up packet processing with high-priced hardware, the cost of deploying a web application firewall is rising; in addition, current anomaly-based detection technologies that do not take the characteristics of the web application into account cause many false positives and false negatives. To reduce false positives and false negatives, this study suggests a real-time anomaly detection method based on analyzing the length of the parameter values contained in web clients' requests. It also designs and suggests a WAF (Web Application Firewall) that can be applied to a low-priced or legacy system to process application data without dedicated hardware, and suggests a method to resolve the sluggish performance caused by copying packets into the application area for application-data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying yet another security system is considered a burden because of the many network security systems already in use.
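
The request-parameter length model can be realized in several ways; the sketch below is one plausible reading, not the paper's design. It learns the mean and variance of each parameter's value length from normal traffic and flags requests whose lengths are improbably large under a Chebyshev-style bound; the parameter names and threshold are illustrative.

```python
# Sketch of per-parameter length anomaly detection for web requests.
# One plausible reading of the abstract, not the paper's exact design:
# learn mean/variance of each parameter's value length from normal traffic,
# then flag lengths that are improbable under a Chebyshev-style bound.
from collections import defaultdict

class ParamLengthModel:
    def __init__(self, threshold=0.1):
        self.threshold = threshold                # min acceptable probability
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # n, mean, M2

    def train(self, param, value):
        """Welford's online update of length mean/variance per parameter."""
        s = self.stats[param]
        n, mean, m2 = s[0] + 1, s[1], s[2]
        d = len(value) - mean
        mean += d / n
        m2 += d * (len(value) - mean)
        self.stats[param] = [n, mean, m2]

    def is_anomalous(self, param, value):
        n, mean, m2 = self.stats[param]
        if n < 2 or len(value) <= mean:
            return False                          # only long values are suspect
        var = m2 / (n - 1)
        # Chebyshev bound: P(|X - mean| >= d) <= var / d^2
        p = min(1.0, var / (len(value) - mean) ** 2)
        return p < self.threshold

# Usage: train on normal requests, then screen live ones in real time.
model = ParamLengthModel()
for v in ["kim", "lee", "park", "choi", "jang"]:
    model.train("username", v)
print(model.is_anomalous("username", "a" * 500))   # True: likely overflow/injection
print(model.is_anomalous("username", "hong"))      # False
```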

Estimation of Genetic Variations and Selection of Superior Lines from Diallel Crosses in Layer Chicken (산란계종의 잡종강세 이용을 위한 유전학적 기초연구와 우량교배조합 선발에 관한 연구)

  • 오봉국;한재용;손시환;박태진
    • Korean Journal of Poultry Science
    • /
    • v.13 no.1
    • /
    • pp.1-14
    • /
    • 1986
  • The subject of this study was to obtain genetic information for developing superior layer chickens. Heterosis and combining-ability effects were estimated with 5,759 progenies from full diallel crosses of 6 strains of White Leghorn. Fertility, hatchability, brooder-house viability, rearing-house viability, laying-house viability, age at first egg, body weight at first egg, average egg weight, hen-day egg production, hen-housed egg production, and feed conversion were investigated and analyzed for heterosis effect, general combining ability, specific combining ability, and reciprocal effect by Griffing's Model I. The results obtained are summarized as follows. 1. The general performance of each trait was 94.76% fertility, 74.05% hatchability, 97.47% brooder-house viability, 99.72% rearing-house viability, 93.81% laying-house viability, 150 days of age at first egg, 1,505 g body weight at first egg, 60.08 g average egg weight, 77.11% hen-day egg production, 269.8 eggs hen-housed egg production, and 2.44 feed conversion. 2. The heterosis effects were estimated to be -0.66%, 9.58%, 0.26%, 1.83%, -3.87%, 3.63%, 0.96%, 4.23%, 6.4%, and -0.8% for fertility, hatchability, brooder-house viability, laying-house viability, age at first egg, body weight at first egg, average egg weight, hen-day egg production, hen-housed egg production, and feed conversion, respectively. 3. The results of the combining-ability analysis were as follows. 1) Estimates of general combining ability, specific combining ability, and reciprocal effects were not high for fertility, which was considered to be affected mainly by environmental factors. For hatchability, general combining ability was more important than specific combining ability and reciprocal effects, and the superior strains were K and V, whose additive genetic effects were very high. 2) For brooder-house viability and laying-house viability, specific combining ability and reciprocal effects appeared to be important, and the combinations K×A and A×K were very superior. 3) For feed conversion and average egg weight, general combining ability was more important than specific combining ability and reciprocal effects; on the basis of combining ability, the superior strains were F, K, and B for feed conversion, and F and B for average egg weight. 4) General combining ability, specific combining ability, and reciprocal effects were all important for age at first egg, and the combinations V×F, F×K, and B×F were very useful on the basis of these effects. For body weight at first egg, general combining ability was relatively more important than specific combining ability and reciprocal effects; the K, F, and E strains were recommended for developing a light strain. 5) General combining ability, specific combining ability, and reciprocal effects were important for hen-day and hen-housed egg production, and the combinations F×K, A×K, and K×A were suitable for improving these traits. 4. In general, high general-combining-ability effects were estimated for hatchability, body weight at first egg, average egg weight, hen-day egg production, hen-housed egg production, and feed conversion; high specific-combining-ability effects for brooder-house viability, laying-house viability, age at first egg, hen-day egg production, and hen-housed egg production; and high reciprocal effects for age at first egg.
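
For reference, the decomposition behind these combining-ability estimates follows Griffing's (1956) Method I diallel model; the form below is the standard textbook notation, not copied from the paper.

```latex
% Griffing's Method I diallel model (standard form): the mean record of the
% cross between parents i and j is decomposed as
\[
  y_{ij} = \mu + g_i + g_j + s_{ij} + r_{ij} + \bar{e}_{ij},
\]
% where \mu is the population mean, g_i and g_j are the general combining
% abilities (GCA) of the parents, s_{ij} = s_{ji} is the specific combining
% ability (SCA) of the cross, r_{ij} = -r_{ji} is the reciprocal effect, and
% \bar{e}_{ij} is the mean error, under the constraint \sum_i g_i = 0.
```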

Studies on the Determination Method of Natural Sweeteners in Foods - Licorice Extract and Erythritol (식품 중 감초추출물 및 에리스리톨 분석법에 관한 연구)

  • Hong Ki-Hyoung;Lee Tal-Soo;Jang Yaung-Mi;Park Sung-Kwan;Park Sung-Kug;Kwon Yong-Kwan;Jang Sun-Yaung;Han Ynun-Jeong;Won Hye-Jin;Hwang Hye-Shin;Kim Byung-Sub;Kim Eun-Jung;Kim Myung-Chul
    • Journal of Food Hygiene and Safety
    • /
    • v.20 no.4
    • /
    • pp.258-266
    • /
    • 2005
  • Licorice extract and erythritol, food additives used in Korea, are widely used in foods as sweeteners. Their application in food is regulated by the standards and specifications for food additives, but no official analytical method for determining these sweeteners in food has been established. Accordingly, this study was carried out to establish analytical methods for glycyrrhizic acid in several foods by thin layer chromatography (TLC) and high performance liquid chromatography (HPLC). The qualitative analysis of glycyrrhizic acid consists of clean-up with a Sep-Pak C18 cartridge and separation on a Silica gel 60 F254 TLC plate using 1-butanol:4 N ammonia solution:ethanol (50:20:10) as the mobile solvent. The quantitative analysis of glycyrrhizic acid was performed on a Capcell Pak C18 column at a wavelength of 254 nm with water:acetonitrile (62:38, pH 2.5) as the mobile phase. We also established an HPLC method for erythritol in several foods: the qualitative analysis consists of clean-up with water and hexane, and the quantitative analysis was performed on an Asahipak NH2P-50 column with an RI detector and water:acetonitrile (25:75) as the mobile phase. The results determined as glycyrrhizic acid in 105 items were as follows: N.D.~48.7 ppm for 18 items of soy sauce, N.D.~5.3 ppm for 12 items of sauce, N.D.~988.93 ppm for 15 items of health food, N.D.~180.7 ppm for 26 items of beverages, and N.D.~2.6 ppm for 8 items of alcoholic beverages, respectively, with N.D. for 63 items among the others. The results determined as erythritol in 52 items were as follows: N.D.~155.6 ppm for 13 items of gum and N.D.~398.1 ppm for 12 items of health foods, respectively, with N.D. for 45 items among the others.
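
Quantitation in methods like this typically rests on an external-standard calibration curve relating HPLC peak area to concentration; the sketch below shows that routine step with made-up numbers (the standard concentrations and peak areas are illustrative, not the paper's data).

```python
# External-standard calibration for HPLC quantitation (illustrative numbers,
# not the paper's data): fit peak area vs. standard concentration, then
# convert a sample's peak area to ppm of glycyrrhizic acid.
import numpy as np

std_ppm = np.array([5.0, 10.0, 25.0, 50.0, 100.0])           # standards
std_area = np.array([1.2e4, 2.5e4, 6.1e4, 1.23e5, 2.44e5])   # peak areas (a.u.)

slope, intercept = np.polyfit(std_ppm, std_area, 1)   # linear calibration
r2 = np.corrcoef(std_ppm, std_area)[0, 1] ** 2        # check linearity

def area_to_ppm(area, dilution=1.0):
    """Invert the calibration line; apply any sample dilution factor."""
    return (area - intercept) / slope * dilution

print(f"r^2 = {r2:.4f}")
print(f"sample at area 8.0e4 -> {area_to_ppm(8.0e4):.1f} ppm")
```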