• Title/Summary/Keyword: Division of Information System


Sea Surface pCO2 and Its Variability in the Ulleung Basin, East Sea Constrained by a Neural Network Model (신경망 모델로 구성한 동해 울릉분지 표층 이산화탄소 분압과 변동성)

  • PARK, SOYEONA;LEE, TONGSUP;JO, YOUNG-HEON
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.21 no.1
    • /
    • pp.1-10
    • /
    • 2016
  • Currently available sea surface partial pressure of carbon dioxide ($pCO_2$) data sets in the East Sea are not sufficient to statistically quantify the carbon dioxide flux across the air-sea interface. To compensate for the scarcity of $pCO_2$ measurements, we constructed a neural network (NN) model based on satellite data to map $pCO_2$ over unobserved areas. The NN model was built for the Ulleung Basin, where $pCO_2$ data are most abundant, to map and estimate the variability of $pCO_2$ using in situ $pCO_2$ from 2003 to 2012, sea surface temperature (SST) and chlorophyll data from the MODIS (Moderate-resolution Imaging Spectroradiometer) sensor on the Aqua satellite, and geographic information. The NN model was trained until the correlation between in situ and predicted $pCO_2$ values exceeded 95%. The RMSE (root mean square error) of the NN model output was $19.2{\mu}atm$, much less than the variability of in situ $pCO_2$. The variability of $pCO_2$ shows a stronger negative correlation with SST than with chlorophyll: as SST decreases, the variability of $pCO_2$ increases. When SST is lower than $15^{\circ}C$, $pCO_2$ variability is clearly affected by both SST and chlorophyll, whereas when SST is higher than $15^{\circ}C$ it is less sensitive to changes in either. The mean annual rate of $pCO_2$ increase estimated from the NN model output in the Ulleung Basin is $0.8{\mu}atm\;yr^{-1}$ from 2003 to 2014. Because the NN model maps $pCO_2$ over the whole study area at higher resolution and with lower RMSE than previous studies, it can be a useful tool for understanding the carbon cycle in the East Sea, where access is limited by international affairs.
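The abstract does not give the network architecture, so the following is only a minimal sketch of the general idea of regressing $pCO_2$ on satellite SST, chlorophyll, and position with a small neural network; the scikit-learn model, synthetic data, and variable ranges are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of the satellite-to-pCO2 mapping idea (not the authors' code).
# Real inputs would be MODIS-Aqua SST/chlorophyll grids and shipboard pCO2 data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(5, 25, n),     # SST (deg C)
    rng.uniform(0.1, 3.0, n),  # chlorophyll-a (mg m^-3)
    rng.uniform(129, 132, n),  # longitude
    rng.uniform(35, 38, n),    # latitude
])
# Synthetic stand-in for in situ pCO2 (uatm): cooler, more productive water -> lower pCO2
y = 380 - 3.0 * (15 - X[:, 0]) - 8.0 * X[:, 1] + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
r = float(np.corrcoef(pred, y_test)[0, 1])
print(f"RMSE = {rmse:.1f} uatm, correlation = {r:.2f}")
```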

Evaluation of Cabbage- and Broccoli-genetic Resources for Resistance to Clubroot and Fusarium Wilt (뿌리혹병 및 시들음병에 대한 저항성 양배추와 브로콜리 유전자원 탐색)

  • Lee, Ji Hyun;Jo, Eun Ju;Jang, Kyoung Soo;Choi, Yong Ho;Kim, Jin-Cheol;Choi, Gyung Ja
    • Research in Plant Disease
    • /
    • v.20 no.4
    • /
    • pp.235-244
    • /
    • 2014
  • Clubroot and Fusarium wilt of cole crops (Brassica oleracea L.) are destructive diseases that have for many years reduced quality and caused large yield losses all over the world. Breeding resistant cultivars is an effective approach to reduce the use of chemical fungicides and minimize crop losses. This study was conducted to evaluate the resistance of 60 cabbage (B. oleracea var. capitata) and 6 broccoli (B. oleracea var. italica) lines, provided by the RDA-Genebank Information Center, to clubroot and Fusarium wilt. To investigate resistance to clubroot, seedlings of the genetic resources were inoculated with Plasmodiophora brassicae by drenching the roots with a mixed spore suspension (1:1) of two isolates. Of the tested genetic resources, four cabbage lines were moderately resistant, with 'K166220' showing the highest resistance to P. brassicae; the others were susceptible to clubroot. To select plants resistant to Fusarium wilt, the genetic resources were inoculated with Fusarium oxysporum f. sp. conglutinans by dipping the roots in a spore suspension of the fungus. Among them, 17 cabbage and 5 broccoli lines were resistant, 16 cabbage lines were moderately resistant, and the others were susceptible to Fusarium wilt. In particular, three cabbage lines ('IT227115', 'K161791', 'K173350') and two broccoli lines ('IT227100', 'IT227099') were highly resistant to the fungus. We suggest that these resistant genetic resources can be used as basic material for breeding B. oleracea resistant to clubroot and Fusarium wilt.

Review of Domestic Research Trends on Layered Double Hydroxide (LDH) Materials: Based on Research Articles in Korean Citation Index (KCI) (이중층수산화물(layered double hydroxide, LDH) 소재의 국내 연구동향 리뷰: 한국학술지인용색인(KCI)에 발표된 논문을 대상으로)

  • Seon Yong Lee;YoungJae Kim;Young Jae Lee
    • Economic and Environmental Geology
    • /
    • v.56 no.1
    • /
    • pp.23-53
    • /
    • 2023
  • In this review paper, previous studies on layered double hydroxides (LDHs) published in the Korean Citation Index (KCI) were examined to investigate the research trend for LDHs in Korea. Since the first publication in 2002, 160 papers on LDHs have been published up to January 2023. Among 31 academic fields, the top 5 fields appeared in the order of chemical engineering, chemistry, materials engineering, environmental engineering, and physics. Chemical engineering shows the largest number of published papers (71), while around 10 papers each have been published in the other four fields. All papers were reclassified into 15 research fields based on the industrial and academic purposes of using LDHs. The top 5 of these fields are, in order, environmental purification materials, polymer catalyst materials, battery materials, pharmaceutical/medicinal materials, and basic physicochemical properties. These findings suggest that research on applications of LDH materials in chemical engineering and chemistry aimed at improving their functions, such as environmental purification materials, polymer catalysts, and batteries, has been conducted most actively. The application of LDHs for cosmetic and agricultural purposes and for developing environmental sensors is still at an early stage of research; considering their market potential and the trend toward high-efficiency, eco-friendly materials, however, these deserve attention as emerging application fields. All reclassified papers are summarized in our tables and a supplementary file, including information on the applied materials, key results, and the characteristics and synthesis methods of the LDHs used. We expect that our findings on the overall trends of LDH research in Korea can help in designing future LDH research and in framing policies on resources, energy, and the environment.

A 0.31pJ/conv-step 13b 100MS/s 0.13um CMOS ADC for 3G Communication Systems (3G 통신 시스템 응용을 위한 0.31pJ/conv-step의 13비트 100MS/s 0.13um CMOS A/D 변환기)

  • Lee, Dong-Suk;Lee, Myung-Hwan;Kwon, Yi-Gi;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.3
    • /
    • pp.75-85
    • /
    • 2009
  • This work proposes a 13b 100MS/s 0.13um CMOS ADC for 3G communication systems such as two-carrier W-CDMA applications, which simultaneously require high resolution, low power, and small size at high speed. The proposed ADC employs a four-step pipeline architecture to optimize power consumption and chip area at the target resolution and sampling rate. Area-efficient high-speed high-resolution gate-bootstrapping circuits are implemented at the sampling switches of the input SHA to maintain signal linearity over the Nyquist rate even at a 1.0V supply. The cascode compensation technique on a low-impedance path implemented in the two-stage amplifiers of the SHA and MDAC simultaneously achieves the required operation speed and phase margin with less power consumption than the Miller compensation technique. Low-glitch dynamic latches in the sub-ranging flash ADCs reduce kickback noise referred to the differential input stage of the comparator by isolating the input stage from the output nodes, improving system accuracy. The proposed low-noise current and voltage references based on triple negative T.C. circuits are integrated on chip, with optional off-chip reference voltages. The prototype ADC in a 0.13um 1P8M CMOS technology shows measured DNL and INL within 0.70LSB and 1.79LSB, respectively. The ADC achieves a maximum SNDR of 64.5dB and a maximum SFDR of 78.0dB at 100MS/s. With an active die area of $1.22mm^2$, the ADC consumes 42.0mW at 100MS/s and a 1.2V supply, corresponding to a FOM of 0.31pJ/conv-step.
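As a quick cross-check of the reported figures, the FOM can be reproduced from the measured SNDR, power, and sampling rate. The abstract does not state which FOM definition is used, so the sketch below assumes the common Walden definition.

```python
# Cross-check of the reported FOM using the common Walden definition
# FOM = P / (2^ENOB * f_s); the formula is an assumption, not quoted from the paper.
sndr_db = 64.5          # measured peak SNDR (dB)
power_w = 42.0e-3       # power consumption (W)
fs = 100e6              # sampling rate (samples/s)

enob = (sndr_db - 1.76) / 6.02          # effective number of bits, ~10.4
fom = power_w / (2 ** enob * fs)        # joules per conversion step
print(f"ENOB = {enob:.2f} bits, FOM = {fom * 1e12:.2f} pJ/conv-step")  # ~0.31
```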

Current status of Brassica A genome analysis (Brassica A genome의 최근 연구 동향)

  • Choi, Su-Ryun;Kwon, Soo-Jin
    • Journal of Plant Biotechnology
    • /
    • v.39 no.1
    • /
    • pp.33-48
    • /
    • 2012
  • Driven by scientific curiosity about the structure and function of crops and by experimental efforts to apply such knowledge to plant breeding, genetic maps have been constructed for various crops. In the case of $Brassica$ crops in particular, genetic mapping has accelerated since the genetic information of the model plant $Arabidopsis$ became available. As a result, sequencing of the whole $B.$ $rapa$ genome (A genome) has recently been completed. The genome sequences offer opportunities to develop molecular markers for genetic analysis in $Brassica$ crops. RFLP markers are widely used as the basis for genetic map construction, but their detection system is inefficient. The technical efficiency and analysis speed of PCR-based markers make them preferable for many forms of $Brassica$ genome study. Massive numbers of sequence-informative markers such as SSRs, SNPs, and InDels are also available to increase marker density for high-resolution genetic analysis. High-density maps are invaluable resources for QTL analysis, marker-assisted selection (MAS), map-based cloning, and comparative analysis within $Brassica$ as well as related crop species. Additionally, the advent of new technology, namely next-generation sequencing, has provided momentum for molecular breeding. Here we summarize genetic and genomic resources and suggest their applications for molecular breeding in $Brassica$ crops.

The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.2
    • /
    • pp.72-78
    • /
    • 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The facet size, or grid cell resolution, is determined by the density of rain gauge stations, and a $5{\times}5km$ grid cell is considered the lower limit under the station density available in Korea. The PRISM algorithms using a 270m DEM for South Korea were implemented in a script language environment (Python), and relevant weights for each 270m grid cell were derived from monthly data from 432 official rain gauge stations. Weighted monthly precipitation data from at least 5 nearby stations for each grid cell were regressed against elevation, and the selected linear regression equations with the 270m DEM were used to generate a digital precipitation map of South Korea at 270m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors evaluated. An average 10% reduction in the root mean square error (RMSE) was found for months with more than 100mm of monthly precipitation, compared with the RMSE of the original 5km PRISM estimates. This modified PRISM may be used for rainfall mapping in the rainy season (May to September) at much higher spatial resolution than the original PRISM without losing accuracy.
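A minimal sketch of the per-cell weighted precipitation-elevation regression idea follows. It is illustrative only: the inverse-distance weighting is a placeholder for PRISM's multi-factor station weights, and the station data are synthetic, not the 432-station network used in the paper.

```python
# Per-grid-cell weighted regression of monthly precipitation on elevation,
# in the spirit of PRISM. Station coordinates, weights, and values are synthetic.
import numpy as np

def estimate_cell_precip(cell_xy, cell_elev, stations, min_stations=5):
    """stations: array of (x, y, elevation, monthly_precip) rows."""
    d = np.hypot(stations[:, 0] - cell_xy[0], stations[:, 1] - cell_xy[1])
    idx = np.argsort(d)[:max(min_stations, 5)]     # nearest stations
    w = 1.0 / (d[idx] + 1e-6) ** 2                 # distance weight (placeholder for
                                                   # PRISM's multi-factor weights)
    elev, precip = stations[idx, 2], stations[idx, 3]
    # Weighted least squares: precip ~ a + b * elevation
    A = np.column_stack([np.ones_like(elev), elev])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ precip)
    return beta[0] + beta[1] * cell_elev

# Toy example: 8 stations around a 270 m-resolution cell at 450 m elevation
rng = np.random.default_rng(1)
st = np.column_stack([rng.uniform(0, 10, 8), rng.uniform(0, 10, 8),
                      rng.uniform(50, 900, 8)])
st = np.column_stack([st, 120 + 0.08 * st[:, 2] + rng.normal(0, 5, 8)])  # mm/month
print(f"estimated precipitation: {estimate_cell_precip((5, 5), 450.0, st):.1f} mm")
```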

Analysis of Rice Blast Outbreaks in Korea through Text Mining (텍스트 마이닝을 통한 우리나라의 벼 도열병 발생 개황 분석)

  • Song, Sungmin;Chung, Hyunjung;Kim, Kwang-Hyung;Kim, Ki-Tae
    • Research in Plant Disease
    • /
    • v.28 no.3
    • /
    • pp.113-121
    • /
    • 2022
  • Rice blast is a major plant disease that occurs worldwide and significantly reduces rice yields. Rice blast occurs periodically in Korea, causing significant socio-economic damage because of the unique status of rice as the major staple crop. A disease outbreak prediction system is required for preventing rice blast, and epidemiological investigations of disease outbreaks can aid decision-making for plant disease management. Currently, plant disease prediction and epidemiological investigations are based mainly on quantitatively measurable, structured data such as crop growth and damage, weather, and other environmental factors. Meanwhile, text data related to the occurrence of plant diseases accumulate alongside the structured data, yet epidemiological investigations using these unstructured data have not been conducted. Useful information extracted from unstructured data could support more effective plant disease management. This study analyzed news articles related to rice blast through text mining to identify the years and provinces in which rice blast occurred most in Korea. The average temperature, total precipitation, sunshine hours, and rice varieties supplied in those regions were also analyzed. From these data, it was estimated that the primary causes of the nationwide outbreak in 2020 and the major outbreak in the Jeonbuk region in 2021 were meteorological factors. The results obtained through text mining can be combined with deep learning technology as a tool for investigating the epidemiology of rice blast in the future.
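The study's corpus and mining pipeline are not described in the abstract, so the following is only a minimal sketch of the kind of counting such a text-mining pass might perform; the article snippets, keywords, and province names are placeholders.

```python
# Count news articles mentioning rice blast by year and by province.
# The articles below are placeholders; a real corpus would come from a news crawl or API.
import re
from collections import Counter

articles = [
    {"year": 2020, "text": "Rice blast spread nationwide amid prolonged rain ..."},
    {"year": 2021, "text": "Severe rice blast outbreak reported in Jeonbuk province ..."},
    {"year": 2021, "text": "Jeonbuk farmers report blast damage on susceptible varieties ..."},
]
provinces = ["Jeonbuk", "Jeonnam", "Gyeongbuk", "Chungnam"]

by_year = Counter(a["year"] for a in articles
                  if re.search(r"rice blast|blast", a["text"], re.I))
by_province = Counter(p for a in articles for p in provinces if p in a["text"])
print(by_year)       # e.g. Counter({2021: 2, 2020: 1})
print(by_province)   # e.g. Counter({'Jeonbuk': 2})
```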

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into studies designing models for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease were predominant among those using data mining techniques. Domestic studies were not much different, although studies focusing on hypertension and diabetes were mainly conducted. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, giving a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except smoking duration) were expressed as dummy variables relative to a reference group; six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking duration, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, a genetic algorithm was used: for SVM, the input variables selected by the genetic algorithm were six (age, marital status, education level, economic activity, smoking duration, and physical activity status), and for the artificial neural network they were three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, using TP rate and precision as performance measures. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. For models based on the input variables selected through the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and the MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms known to have high accuracy. The classification accuracy of hyperlipidemia with stacking as a meta-learner was higher than that of other meta-learning algorithms; however, the predictive performance of the proposed meta-learning algorithm equals that of the SVM with the best performance (88.6%) among the single models. The limitations of this study are as follows. Although various variable selection methods were tried, most variables used in the study were categorical dummy variables. With a large number of categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which have not been studied previously, and the improvement of model accuracy through various variable selection techniques is meaningful. In addition, we expect the proposed model to be effective for the prevention and management of hyperlipidemia.
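A minimal sketch of the stacking scheme described above, with an SVM and an MLP as base learners and another SVM as the meta-classifier. The scikit-learn classes and the synthetic data are assumptions for illustration; this is not the study's Korea Health Panel pipeline.

```python
# Stacking sketch: SVM and MLP base learners, SVM meta-classifier.
# Data are synthetic stand-ins, not the Korea Health Panel records used in the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2176, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))),
]
stack = StackingClassifier(estimators=base, final_estimator=SVC(), cv=5)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.3f}")
```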

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the region to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in a reverse logistics network; of these, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively, and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each opened center and secondary market at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA and a conventional competing approach are compared using various performance measures. As the competing approach, the GA approach of Yun (2013) is used; it includes no local search technique such as the IHCM used in the proposed HGA. CPU time, optimal solution, and optimal setting are used as performance measures. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the HGA and GA approaches. The MIP models for the two RLNCC types are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06GHz CPU and 1GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: 10,000 generations, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches.
From the performance comparisons, the network representations by opening/closing decision, and the convergence processes for the two RLNCC types, the experimental results show that the HGA achieves significantly better optimal solutions than the GA, although the GA is slightly quicker in terms of CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on the two RLNCC types, since the former combines a GA search process with an additional local search process, while the latter relies on a GA search alone. In future work, much larger RLNCC instances will be tested to verify the robustness of our approach.
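The sketch below illustrates the hybrid idea of inserting a hill-climbing local search into a bit-string GA loop. It is not the paper's algorithm: it uses one-point crossover and a toy open/close cost function instead of the two-point crossover and full MIP cost model described above.

```python
# Illustrative hybrid GA: bit-string open/close decisions, with a simple
# hill-climbing local search applied to the best individual each generation.
# Cost values and problem size are toy assumptions, not the paper's RLNCC model.
import random

random.seed(0)
N_FACILITIES = 8
FIXED_COST = [10.5, 12.1, 8.9, 11.3, 9.7, 13.0, 10.0, 12.5]
HANDLE_COST = [4.2, 3.8, 5.1, 4.0, 4.9, 3.5, 4.4, 4.1]

def cost(bits):
    if sum(bits) == 0:                       # at least one facility must be open
        return float("inf")
    opened = [i for i, b in enumerate(bits) if b]
    return sum(FIXED_COST[i] for i in opened) + min(HANDLE_COST[i] for i in opened) * 100

def hill_climb(bits):
    best = bits[:]
    for i in range(N_FACILITIES):            # flip one bit at a time, keep improvements
        cand = best[:]
        cand[i] ^= 1
        if cost(cand) < cost(best):
            best = cand
    return best

pop = [[random.randint(0, 1) for _ in range(N_FACILITIES)] for _ in range(20)]
for gen in range(200):
    pop.sort(key=cost)
    pop[0] = hill_climb(pop[0])              # hybrid step: local search on the elite
    parents = pop[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        cut = random.randint(1, N_FACILITIES - 1)
        child = a[:cut] + b[cut:]            # one-point crossover (illustrative)
        if random.random() < 0.1:            # random mutation
            child[random.randrange(N_FACILITIES)] ^= 1
        children.append(child)
    pop = parents + children

best = min(pop, key=cost)
print(f"best open/close pattern: {best}, cost: {cost(best):.1f}")
```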

Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.51-69
    • /
    • 2020
  • It is now a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, and Facebook have become larger, dominant players in their industries. Coupled with the rise of these leading firms, we have also observed that a large number of young firms have become acquisition targets in their early post-IPO stages. This has resulted in a sharp decline in the number of new entries on public exchanges, even though a series of policy reforms have been promulgated to foster competition by increasing new entries. Given the observed industry trend in recent decades, a number of studies have reported increased concentration in most developed countries, but it is less well understood what caused this increase. In this paper, we uncover the mechanisms by which industries have become concentrated over recent decades by tracing the changes in industry concentration associated with a firm's status change in its early post-IPO stages, with emphasis on firms acquired shortly after going public. With the transition to digital-based economies, it is imperative for incumbent firms to adapt and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent firm may respond better to changes in customer tastes and preferences by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping industry concentration, we identify a firm's status in its early post-IPO stages over the sample period from 1990 to 2016 as follows: i) being delisted, ii) remaining standalone, and iii) being acquired. According to our analysis, firms that went public in the 2000s and later were acquired by incumbent firms more quickly than those that went public in earlier periods. We also show a greater acquisition rate for IPO firms in the ICT sector than for their counterparts in other sectors. Our results based on multinomial logit models suggest that a large number of IPO firms have been acquired in their early post-IPO lives despite their financial soundness. Specifically, IPO firms are more likely to be acquired than to be delisted for financial distress in their early post-IPO stages when they are more profitable, more mature, or less leveraged, and IPO firms with venture capital backing have also become acquisition targets more frequently. As a larger number of firms are acquired shortly after their IPOs, our results show increased concentration. While providing limited evidence on the impact of large incumbent firms on changes in industry concentration, our results show that the large-firm effect on concentration is pronounced in the ICT sector. This result possibly captures the current trend in which a few tech giants such as Alphabet, Apple, and Facebook continue to increase their market share. In addition, compared with acquisitions of non-ICT firms, the concentration impact of early-stage IPO firm acquisitions becomes larger when ICT firms are the targets. Our study makes new contributions: to the best of our knowledge, it is one of only a few studies that link a firm's post-IPO status to associated changes in industry concentration.
Although some studies have addressed concentration issues, their primary focus was on market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms. Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent firm is involved as the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation of why industries have become concentrated.
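A minimal sketch of the kind of multinomial logit used to relate a firm's early post-IPO status (delisted, standalone, acquired) to firm characteristics. The data are synthetic and the covariate names follow the abstract's description (profitability, maturity, leverage, VC backing); this is not the authors' dataset or specification.

```python
# Multinomial logit sketch: post-IPO status vs. firm characteristics (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "profitability": rng.normal(0.05, 0.1, n),
    "firm_age":      rng.integers(1, 30, n),
    "leverage":      rng.uniform(0, 0.8, n),
    "vc_backed":     rng.integers(0, 2, n),
})
# Synthetic outcome: 0 = delisted, 1 = standalone, 2 = acquired
score_acq = 2 * df.profitability + 0.5 * df.vc_backed - df.leverage
probs = np.column_stack([np.full(n, 0.2), 0.5 - 0.1 * score_acq, 0.3 + 0.1 * score_acq])
probs = np.clip(probs, 0.01, None)
probs /= probs.sum(axis=1, keepdims=True)
status = np.array([rng.choice(3, p=p) for p in probs])

X = sm.add_constant(df)                 # intercept plus the four covariates
model = sm.MNLogit(status, X).fit(disp=False)
print(model.summary())
```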