• Title/Summary/Keyword: Optimization Process


Optimal Operation of Gas Engine for Biogas Plant in Sewage Treatment Plant (하수처리장 바이오가스 플랜트의 가스엔진 최적 운영 방안)

  • Kim, Gill Jung;Kim, Lae Hyun
    • Journal of Energy Engineering / v.28 no.2 / pp.18-35 / 2019
  • The Korea District Heating Corporation operates a 1,500 kW gas engine generator fueled by about 4,500 m³/day of biogas produced at the sewage treatment plant of the Nanji Water Recycling Center. However, practical operating experience with the biogas power plant is limited, and the lack of accumulated technology and know-how leads to frequent breakdowns and stoppages of the gas engine, causing considerable economic loss. It is therefore necessary to prepare fundamental technical measures for stable operation of the plant. In this study, the process problems of the gas engine plant using biogas generated at the Nanji Water Recycling Center sewage treatment plant were identified, and actual operation was optimized by minimizing the problems at each step. For gas purification, which is the main cause of breakdown stoppages, quality standards for the adsorption capacity of the activated carbon were established through component analysis and adsorption tests on the activated carbon currently in use. In addition, standards for the activated carbon replacement cycle to minimize impurities, a tighter hydrogen sulfide measurement schedule, localization of the activated carbon supply, and strengthened and improved plant operating standards were applied to actual operation. As a result, the operating performance of gas engine #1 increased by 530% and that of engine #2 by 250%. Improvement of the vent line equipment also reduced the work process and increased normal operation time and the operation rate. In terms of economics, sales increased by KRW 77,000 per year. It is judged that applying these strengthened and improved operating standards is an operational plan that can reduce stoppages of the biogas plant and increase its utilization rate.

Optimization of Characteristic Change due to Differences in the Electrode Mixing Method (전극 혼합 방식의 차이로 인한 특성 변화 최적화)

  • Jeong-Tae Kim;Carlos Tafara Mpupuni;Beom-Hui Lee;Sun-Yul Ryou
    • Journal of the Korean Electrochemical Society / v.26 no.1 / pp.1-10 / 2023
  • The cathode, one of the four major components of a lithium secondary battery, is the component that largely determines the battery's energy density. Mixing the active material, conductive material, and polymer binder is an essential step in the commonly used wet manufacturing process for the cathode. However, because there is no systematic method for setting the mixing conditions, performance differences arise in most cases depending on the manufacturer. Therefore, LiMn2O4 (LMO) cathodes were prepared using a commonly used THINKY mixer and a homogenizer to optimize the mixing method in the cathode slurry preparation step, and their characteristics were compared. Each mixing condition was 2,000 RPM for 7 min, and to isolate the effect of the mixing method alone, the other experimental conditions (mixing time, material input order, etc.) were kept constant. Between the THINKY mixer LMO (TLMO) and homogenizer LMO (HLMO) cathodes, HLMO showed more uniform particle dispersion than TLMO and therefore higher adhesive strength. The electrochemical evaluation also revealed that the HLMO cathode performed better, with a more stable cycle life than TLMO. The discharge capacity retention of HLMO at 69 cycles was 88%, about 4.4 times higher than that of TLMO, and in terms of rate capability HLMO maintained better capacity retention even at high C-rates of 10, 15, and 20 C, with a higher capacity recovery at 1 C than TLMO. It is postulated that the homogenizer improves the characteristics of the slurry containing the active material, the conductive material, and the polymer binder: by suppressing the strong electrostatic attraction, and hence aggregation, of the conductive material, it disperses the conductive material uniformly and forms an electrically conductive network. As a result, surface contact between the active material and the conductive material increases, electrons move more smoothly, changes in lattice volume during charging and discharging become more reversible, and the contact resistance between the active material and the conductive material is suppressed.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from the sequential input. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can capture dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to make errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We constructed language models using three or four LSTM layers. Each model was trained with the stochastic gradient algorithm and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. The simulation study was done on Old Testament texts using the deep learning package Keras with the Theano backend. After preprocessing, the dataset contained 74 unique characters including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following (21st) character as output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were run on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity did not improve significantly and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for processing the Korean language in language processing and speech recognition, which are the basis of artificial intelligence systems.
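
A minimal sketch of the phoneme-level LSTM language model described above, written with the Keras API (the study used Keras on the Theano backend; the hidden-layer size, batch size, and choice of Adam as optimizer here are illustrative assumptions, while the 74-character vocabulary, 20-character window, and three LSTM layers follow the abstract):

```python
# Sketch of a phoneme-level LSTM language model (illustrative, not the authors' code).
from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 74      # unique phonemes and punctuation marks after preprocessing
WINDOW = 20     # 20 consecutive characters predict the 21st
HIDDEN = 512    # assumed hidden units per LSTM layer

model = keras.Sequential([
    keras.Input(shape=(WINDOW, VOCAB)),            # one-hot encoded phoneme window
    layers.LSTM(HIDDEN, return_sequences=True),
    layers.LSTM(HIDDEN, return_sequences=True),
    layers.LSTM(HIDDEN),                           # final layer returns a single vector
    layers.Dense(VOCAB, activation="softmax"),     # distribution over the next phoneme
])

# The study compared SGD, Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam;
# Adam is shown here as one representative choice.
model.compile(optimizer="adam", loss="categorical_crossentropy")

# x: (n_samples, WINDOW, VOCAB) one-hot inputs, y: (n_samples, VOCAB) next phoneme
# model.fit(x, y, batch_size=128, epochs=10, validation_split=0.15)
```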

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed to compensate for the weaknesses of traditional asset allocation methods and to replace the parts those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, is stable for managing large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited memory environments but also trains much faster than traditional boosting methods, so it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict asset risk and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model, thereby narrowing the gap between theory and practice and proposing a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment showed improvements in both cumulative return and estimation error: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results thus showed improved portfolio performance through reduced estimation errors in the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risk with a recent algorithm. Various studies have examined parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new way to reduce these errors using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
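
As a rough illustration of the idea, the sketch below predicts each sector's next-period volatility with XGBoost and converts the predictions into inverse-volatility weights. The paper feeds predicted risk into the covariance estimation of a full risk parity optimization; the simplified weighting, feature construction, and hyperparameters here are assumptions for illustration only.

```python
# Sketch: per-sector volatility prediction with XGBoost feeding a naive
# inverse-volatility (risk-parity-style) weighting. Illustrative only.
import pandas as pd
import xgboost as xgb

def realized_vol(returns: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """Rolling realized volatility per sector."""
    return returns.rolling(window).std()

def predict_next_vol(returns: pd.DataFrame, lookback: int = 1000) -> pd.Series:
    """Fit one XGBoost regressor per sector: volatility regressed on its own lags."""
    vol = realized_vol(returns).dropna()
    preds = {}
    for col in vol.columns:
        v = vol[col].iloc[-lookback:]
        # Lagged volatilities as features (assumed feature set).
        X = pd.DataFrame({f"vol_lag{k}": v.shift(k) for k in range(1, 6)}).dropna()
        y = v.loc[X.index]
        model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(X, y)
        # Features for the next period are the most recent five observed volatilities.
        x_next = pd.DataFrame({f"vol_lag{k}": [v.iloc[-k]] for k in range(1, 6)})
        preds[col] = float(model.predict(x_next)[0])
    return pd.Series(preds)

def inverse_vol_weights(pred_vol: pd.Series) -> pd.Series:
    """Naive risk parity: weights proportional to 1 / predicted volatility."""
    inv = 1.0 / pred_vol
    return inv / inv.sum()
```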

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. When forecasting KOSPI 200 Index return volatility, the MSE metric favors the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel, however, shows exceptionally low forecasting accuracy. We then suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility itself cannot be traded; nevertheless, the simulation results are meaningful since the Korea Exchange introduced tradable volatility futures contracts in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the test period. The percentage of profitable trades ranges from 47.5% to 50.0% for the MLE-based GARCH IVTS models and from 51.8% to 59.7% for the SVR-based models. The MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for the SVR-based symmetric S-GARCH; the MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and the MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for the SVR-based version. The linear kernel yields higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, compared with +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined for better performance. We do not consider trading costs such as brokerage commissions and slippage. The IVTS trading performance is also unrealistic because we use historical volatility values as the trading object. Accurate forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine learning-based GARCH models can provide better information for stock market investors.
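
The sketch below illustrates the flavor of an SVR-based GARCH(1,1) estimation and the IVTS entry rule described above; the variance proxy, rolling window, and SVR hyperparameters are assumptions, not the authors' exact specification.

```python
# Sketch of an SVR-based GARCH(1,1)-style volatility regression: the variance
# proxy (squared return) is regressed on its own lag and a lagged rolling
# variance, mimicking sigma^2_t = w + a*e^2_{t-1} + b*sigma^2_{t-1}.
import pandas as pd
from sklearn.svm import SVR

def svr_garch_forecast(returns: pd.Series, kernel: str = "linear") -> float:
    eps2 = returns ** 2                              # squared returns as variance proxy
    sigma2 = eps2.rolling(20).mean()                 # crude conditional-variance proxy
    df = pd.DataFrame({
        "eps2_lag": eps2.shift(1),
        "sigma2_lag": sigma2.shift(1),
        "target": eps2,
    }).dropna()
    X, y = df[["eps2_lag", "sigma2_lag"]].values, df["target"].values
    model = SVR(kernel=kernel, C=1.0, epsilon=1e-6)  # kernel: "linear", "poly", or "rbf"
    model.fit(X[:-1], y[:-1])
    return float(model.predict(X[[-1]])[0])          # next-period variance forecast

# IVTS-style rule from the abstract: buy volatility if the forecast rises,
# sell if it falls, otherwise hold the existing position.
def ivts_signal(vol_forecast_tomorrow: float, vol_today: float) -> str:
    if vol_forecast_tomorrow > vol_today:
        return "buy"
    if vol_forecast_tomorrow < vol_today:
        return "sell"
    return "hold"
```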

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation because of the rather large memory space needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions to those having a triangular, trapezoidal, or other pre-defined shape. Such functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). However, this results in a loss of computational power due to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. This approach provides satisfying computational speed, very high precision of definition, and lets users choose membership functions of any shape, but it can also waste a significant amount of memory: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the points common to several fuzzy sets, i.e. points with non-null membership values, are very few; in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, without restricting the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common to fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The number of bits required is therefore 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and corresponds to the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 24 and the memory dimension is therefore 128 × 24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have to store the membership value of every fuzzy set, with a word dimension of 8 × 5 bits, so the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets; focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as shown. The rule weights are computed by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is fed to a comparator (combinatory net): if the index equals the bus value, one of the non-null weights derived from the rule is produced as output, otherwise the output is zero (fig. 2). The memory dimension of the antecedent is thus reduced, since only non-null values are memorized, while the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse; from our experience with fuzzy systems, typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions of specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
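
The memory-sizing arithmetic in the abstract can be reproduced directly; the short script below (an illustrative restatement, not the authors' code) recovers the 24-bit word length and the 128 × 24 versus 128 × 40 bit comparison.

```python
# Worked example of the memory-sizing arithmetic from the abstract.
import math

U = 128          # elements in the universe of discourse
N_SETS = 8       # fuzzy sets in the term set
LEVELS = 32      # discretization levels of the membership values
NFM = 3          # max number of non-null memberships per element

dm_m = math.ceil(math.log2(LEVELS))    # bits per membership value -> 5
dm_fm = math.ceil(math.log2(N_SETS))   # bits for the membership-function index -> 3

word_length = NFM * (dm_m + dm_fm)     # 3 * (5 + 3) = 24 bits
compressed_memory = U * word_length    # 128 * 24 = 3072 bits

vectorial_word = N_SETS * dm_m         # 8 * 5 = 40 bits
vectorial_memory = U * vectorial_word  # 128 * 40 = 5120 bits

print(word_length, compressed_memory, vectorial_memory)
```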


An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets contains many potential obstacles, stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Its key attributes are composed of the categories objective, requirements, activity, and service. The objective category, which has sub-attributes including the operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify public services. The requirements ontology is derived from the basic and common components of public services and target countries; its key attributes are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation; its key attributes are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of the enterprises. For financial packages, a mix of various foreign aid funds can be simulated at this stage. We expect that the proposed ontology model and the business incubation platform can be used by the various participants in the public service export market, and that they will be especially beneficial to small and medium businesses with relatively fewer resources and less experience in public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
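
As a purely illustrative aid (not the authors' Protégé model), the five ontologies and the key attributes listed above can be summarized in a simple data structure:

```python
# Illustrative data-structure sketch of the five platform ontologies and their
# key attributes as listed in the abstract.
from dataclasses import dataclass, field

@dataclass
class Ontology:
    name: str
    key_attributes: list[str] = field(default_factory=list)

platform_ontologies = [
    Ontology("service",      ["objective", "requirements", "activity", "service"]),
    Ontology("requirements", ["business", "technology", "constraints"]),
    Ontology("environment",  ["user", "requirements", "activity"]),
    Ontology("enterprise",   ["activity", "organization", "strategy", "marketing", "time"]),
    Ontology("country",      ["economy", "social infrastructure", "law", "regulation",
                              "customs", "population", "location", "development strategies"]),
]
```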

Dose Evaluation of TPS according to Treatment Sites in IMRT (세기조절방사선치료 시 치료 부위에 따른 치료계획 시스템 간 선량평가)

  • Kim, Jin Man;Kim, Jong Sik;Hong, Chae Seon;Park, Ju Young;Park, Su Yeon;Ju, Sang Gyu
    • The Journal of Korean Society for Radiation Therapy / v.25 no.2 / pp.181-186 / 2013
  • Purpose: This study produced treatment plans for prostate cancer (a homogeneous density region) and lung cancer (a non-homogeneous density region) using the radiation treatment planning systems Pinnacle³ (version 9.2, Philips Medical Systems, USA) and Eclipse (version 10.0, Varian Medical Systems, USA) in order to quantify differences in dose calculation according to tissue density in IMRT. Materials and Methods: The subjects were prostate cancer patients (n=5) and lung cancer patients (n=5) treated at our hospital. Identical constraints and optimization processes according to the protocol were applied to all subjects. For the prostate cancer plans, 10 MV photons and 7 beams were used, and 2.5 Gy per fraction was prescribed over 28 fractions for a total of 70 Gy. For the lung cancer plans, 6 MV photons and 6 beams were used, and 2 Gy per fraction was prescribed over 33 fractions for a total of 66 Gy. With the two treatment planning systems, the maximum, mean, and minimum doses of the CTV, the PTV, and the organs at risk (OARs) around the tumor were investigated. Results: For prostate cancer, the two treatment planning systems agreed within 2% for the CTV and PTV doses, and the normal organs near the tumor (Bladder, Both femur, and Rectum out) satisfied the dose constraints. For lung cancer, the CTV and PTV doses also differed by less than 2%, and the normal organs (Esophagus, Spinal cord, and Both lungs) satisfied the dose restrictions. However, the minimum dose of the Eclipse plan was 1.9% higher for the CTV and 3.5% higher for the PTV, and for both lungs there was a 3.0% difference at V5Gy. Conclusion: Each TPS satisfied our hospital's dose limits regardless of density, demonstrating its clinical accuracy. More accurate and precise treatment plans can be expected if further studies on treatment planning for diverse sites and on the application of each TPS are carried out.


N- and P-doping of Transition Metal Dichalcogenide (TMD) using Artificially Designed DNA with Lanthanide and Metal Ions

  • Kang, Dong-Ho;Park, Jin-Hong
    • Proceedings of the Korean Vacuum Society Conference / 2016.02a / pp.292-292 / 2016
  • Transition metal dichalcogenides (TMDs) with a two-dimensional layered structure have been considered highly promising materials for next-generation flexible, wearable, stretchable and transparent devices due to their unique physical, electrical and optical properties. Recent studies on TMD devices have focused on developing a suitable doping technique because precise control of the threshold voltage (V_TH) and the number of tightly-bound trions are required to achieve high performance electronic and optoelectronic devices, respectively. In particular, it is critical to develop an ultra-low level doping technique for the proper design and optimization of TMD-based devices because high level doping (about 10¹² cm⁻²) causes TMD to act as a near-metallic layer. However, it is difficult to apply an ion implantation technique to TMD materials due to crystal damage that occurs during the implantation process. Although safe doping techniques have recently been developed, most of the previous TMD doping techniques presented very high doping levels of ~10¹² cm⁻². Recently, low-level n- and p-doping of TMD materials was achieved using cesium carbonate (Cs₂CO₃), octadecyltrichlorosilane (OTS), and M-DNA, but further studies are needed to reduce the doping level down to an intrinsic level. Here, we propose a novel DNA-based doping method on MoS₂ and WSe₂ films, which enables ultra-low n- and p-doping control and allows for proper adjustments in device performance. This is achieved by selecting and/or combining different types of divalent metal and trivalent lanthanide (Ln) ions on DNA nanostructures. The available n-doping range (Δn) on MoS₂ by Ln-DNA (DNA functionalized by trivalent Ln ions) is between 6×10⁹ cm⁻² and 2.6×10¹⁰ cm⁻², which is even lower than that provided by pristine DNA (~6.4×10¹⁰ cm⁻²). The p-doping change (Δp) on WSe₂ by Ln-DNA is adjusted between -1.0×10¹⁰ cm⁻² and -2.4×10¹⁰ cm⁻². In the case of Co-DNA (DNA functionalized by both divalent metal and trivalent Ln ions) doping where Eu³⁺ or Gd³⁺ ions were incorporated, a light p-doping phenomenon is observed on MoS₂ and WSe₂ (respectively, negative Δn below -9×10⁹ cm⁻² and positive Δp above 1.4×10¹⁰ cm⁻²) because the added Cu²⁺ ions probably reduce the strength of negative charges in Ln-DNA. However, a light n-doping phenomenon (positive Δn above 10¹⁰ cm⁻² and negative Δp below -1.1×10¹⁰ cm⁻²) occurs in the TMD devices doped by Co-DNA with Tb³⁺ or Er³⁺ ions. A significant (factor of ~5) increase in field-effect mobility is also observed on the MoS₂ and WSe₂ devices, which are, respectively, doped by Tb³⁺-based Co-DNA (n-doping) and Gd³⁺-based Co-DNA (p-doping), due to the reduction of effective electron and hole barrier heights after the doping. In terms of optoelectronic device performance (photoresponsivity and detectivity), the Tb³⁺- or Er³⁺-Co-DNA (n-doping) and the Eu³⁺- or Gd³⁺-Co-DNA (p-doping) improve the MoS₂ and WSe₂ photodetectors, respectively.


Process Optimization of Dextran Production by Leuconostoc sp. strain YSK Isolated from Fermented Kimchi (김치로부터 분리된 Leuconostoc sp. strain YSK 균주에 의한 덱스트란 생산 조건의 최적화)

  • Hwang, Seung-Kyun;Hong, Jun-Taek;Jung, Kyung-Hwan;Chang, Byung-Chul;Hwang, Kyung-Suk;Shin, Jung-Hee;Yim, Sung-Paal;Yoo, Sun-Kyun
    • Journal of Life Science / v.18 no.10 / pp.1377-1383 / 2008
  • A bacterium producing non- or partially digestible dextran was isolated from kimchi broth by an enrichment culture technique and tentatively identified as Leuconostoc sp. strain YSK. We used response surface methodology (a Box-Behnken design) to optimize the principal parameters, namely culture pH, temperature, and yeast extract concentration, for maximizing dextran production. The parameter ranges were determined from prior screening work done in our laboratory and were chosen as pH 5.5, 6.5, and 7.5; temperature 25, 30, and 35°C; and yeast extract 1, 5, and 9 g/l. The initial sucrose concentration was 100 g/l. The mineral medium consisted of 3.0 g KH₂PO₄, 0.01 g FeSO₄·H₂O, 0.01 g MnSO₄·4H₂O, 0.2 g MgSO₄·7H₂O, 0.01 g NaCl, and 0.05 g CaCO₃ per liter of deionized water. The optimum conditions were a pH of around 7.0, a temperature of 27 to 28°C, and a yeast extract concentration of 6 to 7 g/l. The best dextran yield was 60% (g dextran per g sucrose), and the best dextran productivity was 0.8 g/l·h.
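
A minimal sketch of the experimental-design side described above: a coded Box-Behnken design for the three factors at the stated levels, decoded to real units and fitted with a full second-order response-surface model. The helper names and the placeholder yield values are assumptions; only the factor levels come from the abstract.

```python
# Box-Behnken design for pH (5.5/6.5/7.5), temperature (25/30/35 C),
# and yeast extract (1/5/9 g/l), with a quadratic response-surface fit.
# The yield values below are placeholders, not the paper's measurements.
import itertools
import numpy as np

levels = {
    "pH": [5.5, 6.5, 7.5],
    "temp": [25.0, 30.0, 35.0],
    "yeast": [1.0, 5.0, 9.0],
}

def box_behnken_coded(n_center: int = 3) -> np.ndarray:
    """Coded (-1, 0, +1) Box-Behnken design for three factors."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):   # each pair of factors
        for a, b in itertools.product((-1, 1), repeat=2):
            point = [0, 0, 0]
            point[i], point[j] = a, b
            runs.append(point)
    runs += [[0, 0, 0]] * n_center                     # replicated center points
    return np.array(runs, dtype=float)

def decode(coded: np.ndarray) -> np.ndarray:
    """Map coded levels to the actual factor values."""
    lows = np.array([v[0] for v in levels.values()])
    highs = np.array([v[2] for v in levels.values()])
    centers, halfwidths = (lows + highs) / 2, (highs - lows) / 2
    return centers + coded * halfwidths

def quadratic_features(x: np.ndarray) -> np.ndarray:
    """Full second-order model: intercept, linear, interaction, squared terms."""
    x1, x2, x3 = x.T
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

design = decode(box_behnken_coded())
# y = measured dextran yields for each run (placeholder random values here).
y = np.random.default_rng(0).uniform(30, 60, size=len(design))
coef, *_ = np.linalg.lstsq(quadratic_features(design), y, rcond=None)
print(coef)   # fitted response-surface coefficients
```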