• Title/Summary/Keyword: Application Fields

Search Results: 3,190

Mineral Nutrition of the Field-Grown Rice Plant -[I] Recovery of Fertilizer Nitrogen, Phosphorus and Potassium in Relation to Nutrient Uptake, Grain and Dry Matter Yield- (포장재배(圃場栽培) 수도(水稻)의 무기영양(無機營養) -[I] 삼요소이용률(三要素利用率)과 양분흡수량(養分吸收量), 수량(收量) 및 건물생산량(乾物生産量)과의 관계(關係)-)

  • Park, Hoon
    • Applied Biological Chemistry
    • /
    • v.16 no.2
    • /
    • pp.99-111
    • /
    • 1973
  • Percentage recovery of fertilizer nitrogen, phosphorus and potassium by the rice plant (Oryza sativa L.) was investigated at application levels of 8, 10, 12 and 14 kg/10a of N, 6 kg of $P_2O_5$ and 8 kg of $K_2O$ in 1967 (51 sites) and 1968 (32 sites). Two types of nutrient contribution to yield were postulated: a P type, in which phosphorus first increases silicate uptake and silicate in turn increases nitrogen uptake, and a K type, in which potassium first increases P uptake and P in turn increases nitrogen uptake. These types follow from linear correlation analyses between percentage recovery of fertilizer nutrients and grain or dry matter yield and nutrient uptake, with the following results. 1. The frequency of zero or negative recovery was 4% for nitrogen, 48% for phosphorus and 38% for potassium. The frequency distribution of percentage recovery was approximately normal with a maximum at the 30-40 recovery class for nitrogen, but skewed with a maximum below the zero class for phosphorus and potassium. 2. Percentage recovery (counting only positive values) was 33 for N (above 10 kg/10a), 27 for P and 40 for K in 1967, and 40 for N, 20 for P and 46 for K in 1968. The two-year mean percentage recovery, counting zero for zero or negative values, was 33 for N, 13 for P and 27 for K. 3. The standard deviation of percentage recovery exceeded the mean for P and K, and the annual variation of the coefficient of variation (CV) was greatest for P. 4. Significant correlations between percentage recovery and grain or dry matter yield were most frequent for N and least frequent for P. Percentage recovery of nitrogen at the 10 kg level correlated significantly only with percentage recovery of P in 1967 and only with that of K in 1968.
5. The correlation between percentage recovery and dry matter yield across all treatments was significant only for P in 1967 and only for K in 1968. Negative correlation coefficients between percentage recovery and grain or dry matter yield of the no-fertilizer or minus-nutrient plots appeared only for K in 1967 and only for P in 1968, indicating that phosphorus fertilizer played a distinctly positive role in 1967 but a somewhat negative one in 1968, while potassium fertilizer worked positively in 1968 but somewhat negatively in 1967. 6. Correlations between percentage recovery and grain yield showed the same tendency as with dry matter yield but with lower coefficients; the role of the nutrients was thus expressed more precisely through dry matter yield. 7. Percentage recovery of N very frequently correlated significantly with nitrogen uptake of nitrogen-applied plots, correlated significantly and negatively with nitrogen uptake of minus-nitrogen plots, and less frequently correlated significantly with P, K and Si uptake of nitrogen-applied plots. 8. Percentage recovery of P correlated significantly with Si uptake of all treatments, and with N uptake of all treatments except the minus-phosphorus plot in 1967, indicating that phosphorus application first increases Si uptake and silicate in turn increases nitrogen uptake. Percentage recovery of P also frequently correlated significantly with P or K uptake of nitrogen-applied plots. 9. Percentage recovery of K correlated significantly with P uptake of all treatments and with N uptake of all treatments except the minus-phosphorus plot; it correlated significantly and negatively with K uptake of the minus-K plot and with Si uptake of the no-fertilizer plot or the highest-N plot in 1968, and negatively with P uptake of the no-fertilizer or minus-nutrient plot in 1967. Percentage recovery of K had higher correlation coefficients with dry matter or grain yield than with K uptake.
The above facts suggest that K application first increases P uptake and phosphorus in turn increases nitrogen uptake for dry matter yield. 10. Percentage recovery of N had a significantly higher correlation coefficient with grain or dry matter yield of the minus-K plot than with those of the minus-phosphorus plot, and higher with those of the fertilized plot than with those of the minus-K plot. A similar tendency was observed between N uptake and percentage recovery of N among these treatments. Percentage recovery of K correlated negatively with grain or dry matter yield of the no-fertilizer or minus-nutrient plot. These facts reveal that phosphorus increases nitrogen uptake and that, when phosphorus or nitrogen is insufficient, potassium competitively inhibits nitrogen uptake. 11. Percentage recovery of N, P and K correlated significantly and negatively with the relative dry matter yield of the minus-phosphorus plot (yield of minus plot x 100 / yield of complete plot) in 1967 and with the relative grain yield of the minus-K plot in 1968. These results suggest that phosphorus affects tillering or the vegetative phase more, while potassium affects grain formation or the reproductive phase more, and they clearly show the annual difference in P and K fertilizer effect according to the weather. 12. The correlation between percentage recovery of fertilizer and the relative yield of the minus-nutrient plot, or that of the no-fertilizer plot to that of the minus-nutrient plot, indicated that nitrogen is the most effective factor for production even in the minus-P or minus-K plot. 13. From the above facts it can be concluded that about 40 to 50 percent of paddy fields do not require P or K fertilizer, and that even where needed the application amount should differ greatly according to the field and the year's weather, especially for phosphorus.

  • PDF

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity with an increased data rate is very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for cases such as sporting events or emergencies. The second scenario covers support for e-Health, car reliability, and the like; the third is related to VR games with delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN is being standardized by the ONF and basically refers to a structure that separates the signals for the control plane from the packets for the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, the messages to be delivered in the event of an emergency have to be transported in a very short time. This is a typical example requiring high delay sensitivity. 5G has to support the high-reliability and delay-sensitivity requirements of V2X in the field of traffic control. For these reasons, V2X is a major delay-critical application.
V2X (vehicle-to-infra/vehicle/nomadic) encompasses all types of communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: first, communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I); second, communication between a vehicle and another vehicle (vehicle-to-vehicle; V2V); and third, communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further types are expected to be added in various fields. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant here. However, the centralized architecture of SDN can be considered unfavorable for delay-sensitive services, because a centralized controller has to communicate with many nodes and provide the processing power. Therefore, in the case of emergency V2X communications, delay-related control functions require a tree-supporting structure. For such a scenario, the architecture of the network processing the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario, and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive a range of cells for information transfer in an SDN network. In the simulation, because 5G provides a sufficiently high data rate, the neighboring-vehicle support information delivered to the car was assumed to be error-free.
Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was considered to be 30-200 km/h in order to examine the network architecture to minimize the delay.
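As a back-of-the-envelope illustration of the delay constraint described above (not the paper's system-level simulation), the worst-case time a vehicle spends inside one small cell can be sketched from the cell radius and vehicle speed alone:

```python
def dwell_time_ms(cell_radius_m, speed_kmh):
    """Worst-case time (ms) a vehicle spends crossing one small cell,
    assuming it travels straight through the full diameter."""
    speed_ms = speed_kmh / 3.6                       # km/h -> m/s
    return (2 * cell_radius_m / speed_ms) * 1000.0   # s -> ms

# extremes from the abstract: 50 m radius cell, 200 km/h vehicle
t = dwell_time_ms(50, 200)
print(round(t))  # about 1800 ms
```

Even in this worst case the vehicle stays in a cell for roughly 1.8 s, so an emergency message must traverse the SDN control path in a small fraction of that window to reach neighboring vehicles while it is still relevant.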

Brief Introduction of Research Progresses in Control and Biocontrol of Clubroot Disease in China

  • He, Yueqiu;Wu, Yixin;He, Pengfei;Li, Xinyu
    • 한국균학회소식:학술대회논문집
    • /
    • 2015.05a
    • /
    • pp.45-46
    • /
    • 2015
  • Clubroot disease of crucifers has occurred in China since 1957. It has spread throughout the country, especially in the southwest and northeast, where it causes 30-80% loss in some fields. The disease has been expanding in recent years as seeds are imported and the floating seedling system is practiced. For its effective control, the Ministry of Agriculture of China set up a program in 2010 with a research team led by Dr. Yueqiu HE, Yunnan Agricultural University. The team includes 20 main researchers from 11 universities and 5 institutes. After 5 years, the team has made much progress in the regularity of disease occurrence, resource collection, resistance identification and breeding, biological agent exploration, formulation, chemical evaluation, and control strategy. About 1,200 collections of local and commercial crucifers were evaluated in the field and by artificial inoculation in the laboratory, and 10 resistant cultivars were bred, including 7 Chinese cabbages and 3 cabbages. More than 800 antagonistic strains were isolated, including bacteria, Streptomyces and fungi. Around 100 chemicals were evaluated in the field and greenhouse for control effect; among them, 6 showed a high control effect. In particular, fluazinam and cyazofamid could control about 80% of the disease, although fluazinam has a negative effect on soil microbes. Clubroot disease could not be controlled by bioagents or chemicals once the pathogen Plasmodiophora brassicae had infected its host and established the parasitic relationship. We found that the earlier the pathogen infected its host, the more severe the disease was; therefore, early control was the most effective. For Chinese cabbage, all control measures should be taken within the first 30 days, because new infection does not cause severe symptoms after 30 days from seeding.
For example, the biocontrol agent Bacillus subtilis strain XF-1 controlled the disease by 70%-85% on average when mixed with seedling substrate and drenched 3 times after transplanting (immediately, at 7 days, and at 14 days). XF-1 has been studied in depth with respect to its control mechanisms, its genome, and the development and application of biocontrol formulations. It produces antagonistic proteins, enzymes, antibiotics and IAA, which promote rhizogenesis and growth. Its genome was sequenced on the Illumina/Solexa Genome Analyzer and assembled into 20 scaffolds; the gaps between scaffolds were then filled by long-fragment PCR amplification to obtain a complete genome of 4,061,186 bp. The whole genome has 43.8% GC content, 108 tandem repeats with an average of 2.65 copies, and 84 transposons. 3,853 CDSs were predicted, of which 112 were assigned to secondary metabolite biosynthesis, transport and catabolism. Among those, five NRPS/PKS giant gene clusters responsible for the biosynthesis of polyketide (pksABCDEFHJLMNRS, 72.9 kb), surfactin (srfABCD, 26.148 kb), bacilysin (bacABCDE, 5.903 kb), bacillibactin (dhbABCEF, 11.774 kb) and fengycin (ppsABCDE, 37.799 kb) are highly homologous to function-confirmed biosynthesis genes in other strains. Moreover, many key regulatory genes for secondary metabolites, such as comABPQKXZ, degQ, sfp, yczE, degU, ycxABCD and ywfG, were also predicted. Therefore, XF-1 has the potential to biosynthesize the secondary metabolites surfactin, fengycin, bacillibactin, bacilysin and bacillaene. Thirty-two compounds were detected in cell extracts of XF-1 by MALDI-TOF-MS, including one macrolactin (m/z 441.06), two fusaricidins (m/z 850.493 and 968.515), one circulocin (m/z 852.509), nine surfactins (m/z 1044.656-1102.652), five iturins (m/z 1096.631-1150.57) and fourteen fengycins (m/z 1449.79-1543.805).
The top three composition types (comprising 56.67% of the total extract) are surfactin, iturin and fengycin. The most abundant is the surfactin type, at 30.37% of the total extract; second is the fengycin type, at 23.28%, with a rich diversity of chemical structures; and the smallest is the iturin type, at 3.02%. Moreover, the same main components were detected in Bacillus sp. 355, which is also an effective biocontrol bacterium against crucifer clubroot. Therefore surfactin, iturin and fengycin may be the main active components of XF-1 against P. brassicae. Twenty-one fengycin-type compounds with antifungal activity were characterized by LC-ESI-MS/MS, including fengycin A $C_{16{\sim}C19}$, fengycin B $C_{14{\sim}C17}$, fengycin C $C_{15{\sim}C18}$, fengycin D $C_{15{\sim}C18}$ and fengycin S $C_{15{\sim}C18}$. Furthermore, one novel compound was identified as dehydroxyfengycin $C_{17}$ according to its MS and 1D and 2D NMR spectral data; its molecular weight is 1488.8480 Da and its formula is $C_{75}H_{116}N_{12}O_{19}$. The fengycin-type compounds (FTCPs, $250{\mu}g/mL$) were used to treat the resting spores of P. brassicae ($10^7/mL$), and leakage of cytoplasm components and cell destruction were monitored. After 12 h of treatment, the absorbances at 260 nm (A260) and 280 nm (A280) gradually approached their maxima, accompanying the collapse of P. brassicae resting spores, and almost no intact cells were observed after 24 h of treatment. These results suggest that the cells were lysed by the FTCPs of XF-1, and the diversity of FTCPs was mainly attributed as a mechanism of clubroot disease biocontrol. Of the five media tested (MOLP, PSA, LB, Landy and LD), the most suitable for growth of the strain was MOLP and the least suitable for strain longevity was the Landy sucrose medium; however, lipopeptide yield was highest in the Landy sucrose medium.
The lipopeptides produced in the five media were analyzed by HPLC; the lipopeptide components were the same, while their contents from B. subtilis XF-1 fermented in the five media differed. We found that it is the lipopeptide content, not the composition, of XF-1 that is affected by the medium, and a lack of nutrition seems to promote lipopeptide secretion from XF-1. Volatile components with inhibitory activity against the fungus Cylindrocarpon spp., collected in sealed vessels, were detected by HS-SPME-GC-MS in eight biocontrol Bacillus species and four positive mutant strains of XF-1 obtained by chemical mutagenesis. They share the same main volatile components, including pyrazines, aldehydes, oxazolidinone and sulfides, which together make up 91.62% in XF-1; the most abundant is the pyrazine type, at 47.03%, followed by the aldehydes at 23.84% and oxazolidinone at 15.68%, with the sulfides smallest at 5.07%.

  • PDF

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
  • Although there have been cases of valuing specific companies or projects since the early 2000s, concentrated in the developed countries of North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually become active. There do exist several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. Recently, however, a web-based technology valuation system referred to as the 'STAR-Value system', which calculates quantitative values of the subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodology and evaluation model, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, taking the contribution of the subject technology to the business value created as the royalty rate. We look at how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner.
Based on classifications such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or the industry sector, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity draws on data-driven sources, or if the estimated value range of similar technologies by industry sector is provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present a highly accurate value range in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, and a market share prediction module based on similar-company selection. In addition, research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system through which the theoretical feasibility of the technology valuation field can be validated and applied in practice, and it is expected to be utilized in various areas of technology commercialization.
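The two income-approach methods named in the abstract can be sketched in a few lines (a minimal illustration, not the STAR-Value system's actual implementation; the revenue figures, royalty rate, tax rate and discount rate below are hypothetical):

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of a stream of future annual cash flows (DCF)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, tax_rate, discount_rate):
    """Value the technology as the PV of the after-tax royalties it saves."""
    royalties = [r * royalty_rate * (1 - tax_rate) for r in revenues]
    return dcf_value(royalties, discount_rate)

revenues = [1000.0, 1200.0, 1400.0]   # hypothetical projected sales
v = relief_from_royalty(revenues, royalty_rate=0.03,
                        tax_rate=0.22, discount_rate=0.10)
print(round(v, 2))  # 69.09
```

In the real system the discount rate would come from the WACC lookup and the royalty rate from the technology-contribution analysis described above; here both are placeholders.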

Influence analysis of Internet buzz to corporate performance : Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon;Kim, Dong Sung;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.37-51
    • /
    • 2015
  • Due to the development of internet technology and the rapid increase of internet data, various studies are actively conducted on how to use and analyze internet data for various purposes. In recent years, in particular, a number of studies have applied text mining techniques in order to overcome the limitations of working only with structured data. Especially notable are studies on sentiment analysis, which scores opinions based on the distribution of polarity, such as the positivity or negativity of the vocabularies or sentences in documents. As part of this line of work, this study tries to predict the ups and downs of companies' stock prices by performing sentiment analysis on online news about those companies. A variety of news on companies is produced online by different economic agents, and it diffuses quickly and is accessed easily on the Internet. So, based on the inefficient market hypothesis, we can expect that news information about an individual company can be used to predict fluctuations in its stock price if proper data analysis techniques are applied. However, as areas of corporate management activity differ, machine-learning-based analysis of text data must consider the characteristics of each company. In addition, since news carrying positive or negative information on certain companies has varying impacts on other companies or industry fields, an analysis predicting the stock price of each individual company is necessary. Therefore, this study attempted to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. Accordingly, this study chose the top companies in the KOSPI 200 as the subjects of analysis, and collected and analyzed two years of online news data for each company from Naver, a representative domestic search portal service.
In addition, considering the differences in the meanings of vocabularies across economic subjects, the study aims to improve performance by building a lexicon for each individual company and applying it in the analysis. As a result, prediction accuracy differed by company, with an average accuracy of 56%. Comparing prediction accuracy across industry sectors, 'energy/chemical', 'consumer goods for living' and 'consumer discretionary' showed relatively higher accuracy than other industries, while sectors such as 'information technology' and 'shipbuilding/transportation' had lower accuracy. Since only five representative companies per industry were collected, it is somewhat difficult to generalize, but a difference in stock price prediction accuracy across industry sectors could be confirmed. At the individual company level, companies such as 'Kangwon Land', 'KT&G' and 'SK Innovation' showed relatively high prediction accuracy, while companies such as 'Young Poong', 'LG', 'Samsung Life Insurance' and 'Doosan' had prediction accuracy below 50%. In this paper, we predicted the stock price movements of individual companies from online news information using pre-built company-specific lexicons, aiming to improve stock price prediction performance. Building on this, future work can seek to increase prediction accuracy by addressing the problem of unnecessary words being added to the sentiment dictionary.
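The company-specific lexicon scoring described above can be sketched as follows (a minimal illustration; the company name and lexicon entries are hypothetical, and the paper's actual per-company dictionaries are far larger and built from labeled news data):

```python
# hypothetical per-company sentiment lexicons (word -> polarity)
lexicons = {
    "CompanyA": {"surge": 1, "record": 1, "lawsuit": -1, "recall": -1},
}

def score(text, lexicon):
    """Sum the polarity of lexicon words appearing in the tokenized text."""
    return sum(lexicon.get(tok, 0) for tok in text.lower().split())

def predict_direction(text, company):
    """Map the aggregate polarity score to an up/down/neutral call."""
    s = score(text, lexicons[company])
    return "up" if s > 0 else "down" if s < 0 else "neutral"

print(predict_direction("Quarterly profit hits record amid sales surge",
                        "CompanyA"))  # up
```

Keeping one lexicon per company is what lets the same word carry different polarity for different firms, which is the core design choice the abstract reports.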

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data will therefore help users take in many events at a glance. In addition, building and providing an event network based on the relevance of events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017, and integrated synonyms, keeping only meaningful words, through preprocessing using NPMI and Word2Vec. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find the peaks of those distributions, and detect events. A total of 32 topics were extracted, and the time of occurrence of each event was deduced from the points at which each topic's distribution surged. As a result, a total of 85 events were detected, of which a final 16 events were retained after filtering with a Gaussian smoothing technique. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected them accordingly. Finally, we set up the event network with each event as a vertex and the relevance score between events as the weight of the edge connecting them.
The event network constructed by our method allowed us to sort out the major political and social events in Korea over the last year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to easily analyze large amounts of data and to identify relevance between events that was difficult to detect with existing methods. In text preprocessing we applied various text mining techniques and Word2vec to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in analyzing Korean texts. The event detection and network construction techniques of this study have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data; by using the date information of the collected news articles, the distribution by topic can also be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, which is difficult to grasp with existing event detection, we can present the connections between events in a summarized form. This is supported by the fact that the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time. It is also possible to identify, through the event network, which event served as the starting point for a series of events. The limitation of this study is that LDA topic modeling gives different results according to the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher.
Also, since each topic is assumed to be exclusive and independent, relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study, or between events belonging to the same topic.
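The cosine-based relevance and edge-construction step described above can be sketched as follows (a minimal illustration with hypothetical topic-presence vectors and a hypothetical threshold; the paper computes cosine coefficients over actual topic co-occurrences):

```python
import math

def cosine(u, v):
    """Cosine coefficient between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# hypothetical daily topic-presence vectors for two detected events
event_a = [1, 1, 0, 1, 0]   # days on which event A's topic surged
event_b = [1, 0, 0, 1, 0]

# connect two event vertices with a weighted edge if relevance is high enough
edges = []
threshold = 0.5             # assumed cutoff, not taken from the paper
w = cosine(event_a, event_b)
if w >= threshold:
    edges.append(("A", "B", round(w, 3)))
print(edges)
```

Each detected event becomes a vertex, and only pairs whose cosine relevance clears the threshold are joined, which is how the network stays sparse enough to read.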

The Effect of Lime Application after Cultivating Winter Forage Crops on the Change of Major Characters and Yield of Peanut (동계사료작물 재배후 석회물질 시용이 땅콩의 주요 형질 및 수량에 미치는 영향)

  • Kim, Dae-Hyang;Chim, Jae-Seong
    • The Journal of Natural Sciences
    • /
    • v.7
    • /
    • pp.103-114
    • /
    • 1995
  • These experiments were conducted to decrease the injury caused by continuous cropping in the peanut fields of the Wangkung area, Chonbuk. A field continuously cropped for four years was used in this experiment. Italian ryegrass and rye were cultivated, and lime materials were applied to improve soil fertility. The results were as follows. 1. Forage crops were cultivated and lime materials were applied on the continuously cropped peanut field. The organic matter content of the experimental plot growing Italian ryegrass alone was only 1.25%. The organic matter content of soil growing Italian ryegrass after application of magnesium lime was 1.37%, and that after application of gypsum was 1.30%, higher than that of soil receiving lime materials only. The organic matter content of soil growing rye after application of gypsum was 1.77%. 2. The phosphate content of soil growing Italian ryegrass alone was 332 ppm. The phosphate content of soil growing Italian ryegrass after application of magnesium lime was 340 ppm, and that after application of gypsum was 312 ppm. The phosphate content of soil growing rye only was 386 ppm, and that of soil growing rye after application of gypsum was 418 ppm. These phosphate contents were lower than that of soil receiving lime materials only. 3. The phytotoxin content of soil growing Italian ryegrass after application of magnesium lime was decreased by 17.7%, and that after application of gypsum by 25.3%. The phytotoxin content of soil growing rye after application of magnesium lime was decreased by 12.0%, and that after application of gypsum by 12.8%, compared with the phytotoxin content of soil receiving lime materials only. Italian ryegrass was the most effective forage crop at decreasing phytotoxins, and gypsum the most effective lime material.
4. Bacterial wilt and late leaf spot of peanut, known as the main causes of continuous cropping failure, were surveyed. The incidence of bacterial wilt was 3.4% in the plot growing Italian ryegrass only and 2.9% in the plot growing rye only. It was 2.5% in the plot growing Italian ryegrass after application of magnesium lime and 2.3% in the plot growing rye after application of gypsum. Incidence in the plots growing forage crops was lower than in the plots receiving lime materials only. 5. The incidence of late leaf spot was high in the plots growing forage crops only, but low in the plots growing forage crops after application of lime materials, compared with the control plot. 6. The growth and yield of peanut were poor in the plots growing forage crops only compared with the control plot receiving lime materials only. The same was true in the plot growing rye after application of lime materials, but growth and yield improved in the plot growing Italian ryegrass after application of lime materials.

  • PDF

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application in the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and Support Vector Machines (SVM). SVM in particular is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the class boundaries, called support vectors. A number of experimental studies have shown that SVM can be successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. 
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not reach the performance on multi-class problems that SVM achieves on binary-class problems. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class classification, but they can deteriorate classification performance. Third, multi-class prediction suffers from the class imbalance problem, which occurs when the instances of one class greatly outnumber those of another. Such data sets often yield a degenerate classifier with a skewed decision boundary and thus reduced classification accuracy. SVM ensemble learning is one way to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, and in this way it can reinforce the training of misclassified minority-class observations. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multi-class prediction problem. 
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take account of the geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not arise by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is used in turn as the test set while the classifier trains on the other nine; that is, the cross-validated folds are tested independently for each algorithm. Through these steps we obtained results for the classifiers on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
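The geometric mean-based accuracy that the abstract contrasts with plain arithmetic accuracy can be sketched as follows. This is an illustrative reconstruction of the evaluation metric only, not the authors' MGM-Boost code; the function name is an assumption.

```python
import math

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls.

    Unlike arithmetic accuracy, this score collapses to zero as soon as
    any single class is entirely misclassified, so it penalizes the
    skewed decision boundaries that imbalanced data tend to produce.
    """
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return math.prod(recalls) ** (1.0 / len(classes))

# A classifier that ignores the minority class scores well on plain
# accuracy but zero on the geometric mean.
y_true = ["A", "A", "A", "A", "B"]
y_pred = ["A", "A", "A", "A", "A"]
plain = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # 0.8
gmean = geometric_mean_accuracy(y_true, y_pred)                    # 0.0
```

This illustrates why the geometric mean-based scores reported above (e.g. SVM at 15.42% versus 49.47% arithmetic) are so much lower than the arithmetic ones: a few poorly predicted rating classes drag the geometric mean down sharply.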

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology utilized in a variety of fields is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. The data accumulated from Internet-of-Things (IoT) devices can support customized intelligent systems through analysis of user environments and patterns, and are being used in many ways to build convenience-oriented smart systems. Recently, IoT has been applied to innovation in the public domain, for smart cities and smart transportation, for example in solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data face degraded object-detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues, because they do not identify individuals, and can therefore be effectively utilized to build intelligent public services for the general, unspecified public. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time. 
The experimental environment for collecting sensor data in real time was established at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances with high passenger traffic, measuring the temperature change of objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures of the 16 areas and their reference values per unit of time were calculated; this makes movement within the detection area maximally visible. In addition, the values were scaled up by a factor of 10 so as to reflect the temperature differences between areas more sensitively. For example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis used the value 285. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting these characteristics of the measured, preprocessed data, we propose a hybrid algorithm, referred to as CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory), that combines CNN, with its superior performance for image classification, and LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of persons passing through one of the 4×4 detection areas. We verified the validity of the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance among these alternatives. 
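The preprocessing described above (per-area reference subtraction on the 4×4 grid, plus the ×10 scaling of the 28.5℃ → 285 example) can be sketched as below. The function name and the exact ordering of the scaling and subtraction steps are assumptions for illustration, not the authors' code.

```python
def preprocess_frame(frame, reference, scale=10):
    """Preprocess one 4x4 frame of sensor temperatures (Celsius).

    Each raw reading is scaled by 10 (28.5 C -> 285) and the per-area
    reference baseline, in the same scaled units, is subtracted, so the
    output highlights deviations caused by passing objects.
    """
    return [[round(frame[r][c] * scale) - round(reference[r][c] * scale)
             for c in range(4)] for r in range(4)]

# Hypothetical example: a uniform 28.0 C baseline, with one detection
# area warmed to 28.5 C by a passing passenger.
reference = [[28.0] * 4 for _ in range(4)]
frame = [[28.0] * 4 for _ in range(4)]
frame[1][2] = 28.5
out = preprocess_frame(frame, reference)
# out[1][2] == 5, all other cells == 0
```

A sequence of such 4×4 frames is then what the CNN-LSTM consumes: each frame plays the role of a low-resolution image, and the frame sequence supplies the time-series dimension.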
By utilizing the proposed devices and models, various metro services are expected to be provided without legal issues over personal information, such as real-time monitoring of public transport facilities and congestion-based emergency response services. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification of the model in other environments remains to be carried out. In the future, the proposed model is expected to gain further reliability if experimental data are collected in a sufficient variety of environments, or if the training data are supplemented by measurements from other sensors.

Monitoring of Pesticide Residues Concerned in Stream Water (전국 하천수 중 잔류우려 농약 실태조사)

  • Hwang, In-Seong;Oh, Yee-Jin;Kwon, Hye-Young;Ro, Jin-Ho;Kim, Dan-Bi;Moon, Byeong-Chul;Oh, Min-Seok;Noh, Hyun-Ho;Park, Sang-Won;Choi, Geun-Hyoung;Ryu, Song-Hee;Kim, Byung-Seok;Oh, Kyeong-Seok;Lim, Chi-Hwan;Lee, Hyo-Sub
    • Korean Journal of Environmental Agriculture
    • /
    • v.38 no.3
    • /
    • pp.173-184
    • /
    • 2019
  • BACKGROUND: This study was carried out to investigate pesticide residues in fifty streams in Korea. Water samples were collected twice: the first sampling was performed from April to May, the season when pesticide application begins, and the second from August to September, a period when pesticides are sprayed multiple times. METHODS AND RESULTS: A total of 136 pesticide residues were analyzed by LC-MS/MS and GC/ECD. Eleven pesticide residues were detected at the first sampling and twenty-eight at the second. Seven pesticides were detected frequently, in more than 10 water samples. An ecological risk assessment (ERA) was carried out using the residue and toxicological data, applying four scenarios: scenarios 1 and 2 used LC50 values with the mean and maximum concentrations, and scenarios 3 and 4 used NOEC values with the mean and maximum concentrations. CONCLUSION: The frequently detected pesticide residues tended to coincide with the period of pathogen and pest control in paddy rice. As a result of the ERA, five pesticides (butachlor, carbendazim, carbofuran, chlorantraniliprole, and oxadiazon) were assessed as risks under scenario 4. However, only oxadiazon was assessed as a risk under scenario 3, and only for the first sampling; it was not assessed as a risk at the second sampling. This appears to be a temporary phenomenon at the first sampling, because the usage of herbicides such as oxadiazon increases from April to May to control weeds in paddy fields. Nevertheless, this study suggests that the five pesticides assessed as risks need to be monitored continuously.
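The four scenarios above can be operationalized with the standard risk-quotient approach (RQ = exposure concentration divided by a toxicity endpoint), one quotient per combination of {LC50, NOEC} endpoint and {mean, maximum} concentration. This is a hedged sketch of that conventional scheme, not the paper's exact calculation; the RQ ≥ 1 threshold, the omission of any assessment factor, and all numbers are illustrative assumptions.

```python
def risk_quotients(mean_conc, max_conc, lc50, noec):
    """Risk quotients for the four endpoint/concentration scenarios.

    All inputs share one unit (e.g. ug/L); an RQ >= 1 is read here as
    a potential risk, a common but simplified convention.
    """
    return {
        "scenario1_lc50_mean": mean_conc / lc50,
        "scenario2_lc50_max":  max_conc / lc50,
        "scenario3_noec_mean": mean_conc / noec,
        "scenario4_noec_max":  max_conc / noec,
    }

# Hypothetical pesticide with a low NOEC relative to its LC50:
rq = risk_quotients(mean_conc=0.5, max_conc=2.0, lc50=100.0, noec=1.0)
risky = {k for k, v in rq.items() if v >= 1.0}
# Only scenario 4 (NOEC with maximum concentration) flags a risk,
# mirroring how the most conservative scenario flags the most pesticides.
```

Because scenario 4 pairs the most sensitive endpoint (NOEC) with the highest exposure estimate (maximum concentration), it is by construction the most conservative, which is consistent with five pesticides being flagged there but only one under scenario 3.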