• Title/Summary/Keyword: 최적해 (optimal solution)

Search Results: 34,360

Genetic Diversity of Korean Native Chicken Populations in DAD-IS Database Using 25 Microsatellite Markers (초위성체 마커를 활용한 가축다양성정보시스템(DAD-IS) 등재 재래닭 집단의 유전적 다양성 분석)

  • Roh, Hee-Jong;Kim, Kwan-Woo;Lee, Jinwook;Jeon, Dayeon;Kim, Seung-Chang;Ko, Yeoung-Gyu;Mun, Seong-Sil;Lee, Hyun-Jung;Lee, Jun-Heon;Oh, Dong-Yep;Byeon, Jae-Hyun;Cho, Chang-Yeon
    • Korean Journal of Poultry Science
    • /
    • v.46 no.2
    • /
    • pp.65-75
    • /
    • 2019
  • A number of Korean native chicken (KNC) populations have been registered in the FAO (Food and Agriculture Organization) DAD-IS (Domestic Animal Diversity Information System, http://www.fao.org/dad-is), but a scientific basis proving that they are unique Korean populations has been lacking. For this reason, this study was conducted to demonstrate the uniqueness of KNC using 25 microsatellite markers. A total of 548 chickens from 11 KNC populations (KNG, KNB, KNR, KNW, KNY, KNO, HIC, HYD, HBC, JJC, LTC) and 7 introduced populations (ARA: Araucana; RRC and RRD: Rhode Island Red C and D; LGF and LGK: White Leghorn F and K; COS and COH: Cornish brown and Cornish black) were used. Allele size per locus was determined using GeneMapper Software (v 5.0). A total of 195 alleles were observed, ranging from 3 to 14 per locus. The MNA, $H_{exp}$, $H_{obs}$, and PIC values within populations were highest in KNY (4.60, 0.627, 0.648, and 0.563, respectively) and lowest in HYD (1.84, 0.297, 0.286, and 0.236, respectively). Genetic uniformity analysis suggested 15 clusters ($\Delta K = 66.22$). Excluding JJC, each population was grouped into a single cluster with high genetic uniformity; JJC was not grouped into one cluster but was distributed across cluster 2 (44.3%), cluster 3 (17.7%), and cluster 8 (19.1%). These results provide a scientific basis for the uniqueness of KNC populations and can be used as basic data for the genetic evaluation and management of KNC breeds.
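The diversity statistics reported above ($H_{exp}$, PIC) are standard functions of per-locus allele frequencies; a minimal sketch of how they are computed, using hypothetical allele counts rather than the study's data:

```python
from collections import Counter

def diversity_stats(genotypes):
    """Compute expected heterozygosity (H_exp) and PIC for one locus.

    genotypes: list of (allele_a, allele_b) tuples, one per individual.
    """
    alleles = [a for pair in genotypes for a in pair]
    n = len(alleles)
    freqs = [c / n for c in Counter(alleles).values()]
    # Expected heterozygosity: 1 minus the sum of squared allele frequencies
    h_exp = 1.0 - sum(p * p for p in freqs)
    # Polymorphism information content (Botstein et al. formulation):
    # H_exp minus the sum over allele pairs of 2 * p_i^2 * p_j^2
    pic = h_exp - sum(
        2 * (freqs[i] * freqs[j]) ** 2
        for i in range(len(freqs))
        for j in range(i + 1, len(freqs))
    )
    return h_exp, pic

# Hypothetical genotypes at one microsatellite locus (allele sizes in bp)
sample = [(150, 152), (150, 150), (152, 154), (150, 154)]
h, p = diversity_stats(sample)  # h == 0.625, p == 0.5546875
```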

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetometer, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a deep learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope). The accompanying status redefines part of the user's interaction behavior: whether the user is accompanied by an acquaintance at close distance, and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation. First, we introduce a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed on each x, y, and z axis value of the sensor data, and sequence data were generated with the sliding window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to preserve the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. We apply dropout to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001, decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data following a different distribution. We expect to obtain a model with robust recognition performance against data changes not considered at the model learning stage.
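The preprocessing pipeline described above (per-axis normalization followed by sliding-window sequence generation) can be sketched as follows; the window length and stride are illustrative assumptions, not values from the paper:

```python
import numpy as np

def make_sequences(data, window, stride):
    """Normalize each axis, then cut the stream into overlapping windows.

    data: (T, C) array of time-synchronized sensor samples (e.g. C = 9 for
    3-axis accelerometer + magnetometer + gyroscope).
    Returns an (N, window, C) array of sequence examples for the CNN input.
    """
    # Per-axis z-score normalization, as in the paper's preprocessing step
    mean = data.mean(axis=0)
    std = data.std(axis=0) + 1e-8          # avoid division by zero
    normed = (data - mean) / std
    # Sliding-window sequence generation
    starts = range(0, len(normed) - window + 1, stride)
    return np.stack([normed[s:s + window] for s in starts])

# Hypothetical stream: 100 samples of 9-channel sensor data
stream = np.random.randn(100, 9)
seqs = make_sequences(stream, window=32, stride=16)  # shape (5, 32, 9)
```

Each resulting sequence would then be fed to the three-layer CNN and two-layer LSTM described in the abstract.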

Facile [11C]PIB Synthesis Using an On-cartridge Methylation and Purification Showed Higher Specific Activity than Conventional Method Using Loop and High Performance Liquid Chromatography Purification (Loop와 HPLC Purification 방법보다 더 높은 비방사능을 보여주는 카트리지 Methylation과 Purification을 이용한 손쉬운 [ 11C]PIB 합성)

  • Lee, Yong-Seok;Cho, Yong-Hyun;Lee, Hong-Jae;Lee, Yun-Sang;Jeong, Jae Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.67-73
    • /
    • 2018
  • $[^{11}C]PIB$ synthesis has been performed in our lab by loop methylation and HPLC purification. However, this method is time-consuming and requires a complicated system. We therefore developed an on-cartridge method that simplifies the synthetic procedure and greatly reduces time by removing the HPLC purification step. We compared 6 different cartridges and evaluated the $[^{11}C]PIB$ production yields and specific activities. $[^{11}C]MeOTf$ was synthesized using a TRACERlab FXC Pro and transferred onto the cartridge by blowing with helium gas for 3 min. To remove byproducts and impurities, the cartridges were washed with 20 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ solution (pH 5.1) and 10 mL of distilled water. $[^{11}C]PIB$ was then eluted with 5 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ into a collecting vial containing 10 mL of saline. Among the 6 cartridges, only the tC18 environmental cartridge removed impurities and byproducts from $[^{11}C]PIB$ completely and showed higher specific activity than the traditional HPLC purification method. This method took only 8~9 min from methylation to formulation. For the tC18 environmental cartridge and conventional HPLC loop methods, the radiochemical yields were $12.3\pm2.2\%$ and $13.9\pm4.4\%$, respectively, and the molar activities were $420.6\pm20.4\,GBq/\mu mol$ (n=3) and $78.7\pm39.7\,GBq/\mu mol$ (n=41), respectively. We successfully developed a facile on-cartridge methylation method for $[^{11}C]PIB$ synthesis that makes the procedure simpler and more rapid and gives higher molar activity than the HPLC purification method.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies carry out campaigns of many types at a level that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. From a corporate standpoint, the effectiveness of campaigns is also decreasing while investment costs rise, which leads to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of various campaigns by collecting and analyzing customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Because campaign data have many features, selecting appropriate features is very important. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the whole. In addition, when a model is trained with too many features, prediction accuracy may degrade due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when there are many features, these methods suffer from poor classification performance and long learning times. Therefore, in this study we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the search for feature subsets, which underlies machine learning model performance, by using statistical characteristics of the data processed in the campaign system. Features with a large influence on performance are derived first and features with negative effects are removed; the sequential method is then applied to increase search efficiency and produce a generalizable prediction model. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped analyze and interpret the prediction results by providing the importance of the derived features. These included features such as age, customer rating, and sales, whose importance was already known statistically.
Unlike the features campaign planners previously relied on to select targets, features such as the combined product name, the average 3-month data consumption rate, and the last 3 months' wireless data usage were unexpectedly selected as important for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
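Sequential forward selection, the greedy baseline the study builds on, can be sketched as follows; the scoring function here is a toy stand-in for whatever model evaluation the campaign system actually uses:

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: repeatedly add the feature that most improves the score.

    features: list of candidate feature names.
    score: function mapping a feature subset (tuple) to a quality value.
    k: number of features to select.
    """
    selected = []
    remaining = list(features)
    for _ in range(k):
        # Evaluate each remaining candidate added to the current subset
        best = max(remaining, key=lambda f: score(tuple(selected + [f])))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy additive scoring function: 'age' and 'rating' contribute most
weights = {"age": 3.0, "rating": 2.0, "sales": 1.0, "noise": -1.0}
score = lambda subset: sum(weights[f] for f in subset)
picked = sequential_forward_selection(list(weights), score, k=2)
# picked == ["age", "rating"]
```

SFFS extends this by interleaving backward removal steps after each forward step, which is the part the study's improved algorithm targets.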

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver these services, reduced latency and high reliability are critical for real-time services on top of high-speed data. Thus, 5G provides a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X), such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves achieve high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN is also limited in offering delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures struggle to meet the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be split at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link has sufficient speed and less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in a self-driving emergency linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the 5G data rate is high enough, we assume that neighbor-vehicle support information reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50~250 m and vehicle speeds of 30~200 km/h, in order to examine the network architecture that minimizes delay.
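The simulation parameters above imply a simple bound on how long a vehicle stays inside one small cell, which is what makes the SDN information-update cycle delay-critical; a minimal sketch under the stated cell radii and speeds (the dwell-time bound is an illustrative simplification, not the paper's simulation model):

```python
def cell_dwell_time(radius_m, speed_kmh):
    """Worst-case traversal time (s) across a cell diameter at a given speed."""
    speed_ms = speed_kmh * 1000 / 3600.0   # convert km/h to m/s
    return 2 * radius_m / speed_ms

# Tightest case from the paper's simulation range: smallest cell (50 m
# radius) at the highest speed (200 km/h) -> all information updates and
# SDN processing must fit within roughly 1.8 s of cell residence time.
fast = cell_dwell_time(50, 200)
# Loosest case: largest cell (250 m radius) at the lowest speed (30 km/h).
slow = cell_dwell_time(250, 30)
```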

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
  • This study uses the Node2vec graph embedding method and Light GBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec improves on the structural-equivalence representation of a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and is therefore known to perform well at both community detection and structural equivalence. The embedding vectors are obtained from fixed-length walks starting at each designated node, which makes the node representations easy to apply as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applies it to international trade information of the Korean food and beverage industry, aiming to contribute to extensive-margin diversification for Korea within the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance. This was superior to the logistic regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the Light GBM-based optimal prediction model outperformed the link prediction model of previous studies, set as the benchmark in this study.
The benchmark model recorded a recall of only 0.75, whereas the proposed model achieved a recall of 0.79. The performance difference between the benchmark model and this study's model is due to the model learning strategy. In this study, trades were grouped by trade value scale, and prediction models were trained differently for these groups. The specific methods were (1) randomly masking trades and training the model without any condition on trade value, (2) randomly masking some of the trades with above-average trade value, and (3) randomly masking some of the trades in the top 25% by trade value. The experiments confirmed that the model trained by randomly masking some of the above-average-value trades performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and Light GBM, and derives useful implications for weight-update strategies that enable better link prediction during model training. The study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, to which it has rarely been applied. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe the approach is sufficiently useful as a tool for policy decision-making.
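In node2vec-style link prediction, candidate edges are typically scored by combining the two endpoint embeddings into an edge feature (the Hadamard product is a common choice) and feeding it to a binary classifier such as Light GBM; a minimal numpy sketch of that feature-construction step, where the embeddings are random placeholders rather than trained vectors:

```python
import numpy as np

def edge_features(emb, pairs):
    """Hadamard edge features for link prediction.

    emb: dict mapping node id -> embedding vector.
    pairs: list of (u, v) candidate edges.
    Returns a (len(pairs), dim) feature matrix for a downstream binary
    classifier (e.g. Light GBM or logistic regression).
    """
    return np.stack([emb[u] * emb[v] for u, v in pairs])

# Placeholder embeddings for a toy trade network (country codes assumed)
rng = np.random.default_rng(0)
emb = {node: rng.normal(size=8) for node in ["KOR", "USA", "VNM", "JPN"]}
X = edge_features(emb, [("KOR", "USA"), ("KOR", "VNM")])  # shape (2, 8)
```

The classifier is then trained on features of existing (positive) and masked/absent (negative) trade links, which is where the study's value-based masking strategies apply.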

Feed Value of the Different Plant Parts of Main Forage Rice Varieties (사료용 벼 주요 품종의 수확부위 별 사료가치)

  • Ahn, Eok-Keun;Won, Yong-Jae;Kang, Kyung-Ho;Park, Hyang-Mi;Jung, Kuk-Hyun;Hyun, Ung-Jo;Lee, Yoon-Sung
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.67 no.1
    • /
    • pp.1-8
    • /
    • 2022
  • In order to manufacture feed suited to consumer use and provide feed value information, we analyzed the feed components of four main forage rice varieties by plant part, harvested 30 days after heading. The contents of the six feed ingredients differed significantly (p<0.05) among harvested parts. In the panicle, crude protein (CP) (6.97%) and lignin (3.11%) were highest, while crude ash (CA) and neutral detergent fiber (NDF) contents were significantly lower, resulting in a total digestible nutrient (TDN) content of 77.29%, higher than that of the stem (64.82%) and the leaf blade and sheath (LBS) (63.57%) (p<0.05). In contrast, crude fat (CF) content did not differ significantly among parts. In panicles from 'Jonong', 'Nokyang', and 'Yeongwoo', the TDN content was 78.48~79.07%, with no significant difference among varieties. In 'Mogwoo' (Mw), the CP content was 8.70%, much higher than in the other varieties (p<0.05). In particular, the Mw TDN content was slightly lower in the panicle (72.95%) but higher in the stem (75.37%) and LBS (66.49%) than in the other varieties. Its CA, NDF, acid detergent fiber (ADF), and lignin contents were also very low compared with the other varieties, so the feed value of its stem and LBS was excellent. In addition, its total dry matter weight (DMW) was 123 g per hill, much higher than the 82~105 g per hill of the other varieties. The distribution of DMW by part was LBS (56.9 g), stem (36.8 g), and panicle (29.3 g); because the proportion of non-panicle parts (straw ratio: 76%) was much higher than the 43~57% of the other varieties, this rice straw is advantageous in both quantity and feed value when used as forage on farms. The relative feed value (RFV) of the four cultivars ranged from 86.79 to 403.74 across all parts, and hay of grade 3 or higher, with an RFV of 100 or more, increased with delayed heading in both stems and LBS. This is due to the accumulation of starch in the grains during ripening, which is consistent with the observation that the RFV of the panicles of the early-flowering 'Jonong' and 'Nokyang' increased.
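The relative feed value reported above is conventionally derived from ADF and NDF contents; a minimal sketch using the standard forage equations (the input values are illustrative, not the study's measurements):

```python
def relative_feed_value(adf_pct, ndf_pct):
    """Standard RFV calculation from fiber contents (% of dry matter).

    DDM: digestible dry matter (%), DMI: dry matter intake (% of body
    weight). RFV is scaled so that full-bloom alfalfa (ADF 41%, NDF 53%)
    scores approximately 100.
    """
    ddm = 88.9 - 0.779 * adf_pct   # digestible dry matter
    dmi = 120.0 / ndf_pct          # dry matter intake
    return ddm * dmi / 1.29

# Illustrative higher-quality forage: lower fiber gives RFV well above 100
rfv = relative_feed_value(adf_pct=30.0, ndf_pct=45.0)
```

Lower ADF and NDF both raise RFV, which is why the low-fiber Mw stem and LBS score well despite their lower TDN than panicles.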

The Innovation Ecosystem and Implications of the Netherlands. (네덜란드의 혁신클러스터정책과 시사점)

  • Kim, Young-woo
    • Journal of Venture Innovation
    • /
    • v.5 no.1
    • /
    • pp.107-127
    • /
    • 2022
  • Global challenges such as the corona pandemic, climate change, and the tech war have put the question of who develops and controls the technologies of the future prominently on the agenda. Development of, and applications in, agrifood, biotech, high-tech, medtech, quantum, AI, and photonics form the basis of the Netherlands' future earning capacity and contribute to solving societal challenges, close to home and worldwide. For the Netherlands and Europe to secure a strategic position in the knowledge and innovation chain, and with it their autonomy relative to China and the United States, clear choices are needed. Brainport Eindhoven: Building on Philips' knowledge base, an innovative ecosystem has been created in which more than 7,000 companies in High-tech Systems & Materials (HTSM) collaborate on new technologies, future earning potential, and international value chains. Nearly 20,000 private R&D employees work across 5 regional high-end campuses and for companies such as ASML, NXP, DAF, Prodrive Technologies, Lightyear, and many others. Brainport Eindhoven holds an internationally leading position in system engineering, semiconductors, micro- and nanoelectronics, AI, integrated photonics, and additive manufacturing. What is developed in Brainport drives growth of the manufacturing industry far beyond the region, thanks to chain cooperation between large companies and SMEs. South Holland: The South Holland ecosystem includes companies such as KPN, Shell, DSM, and Janssen Pharmaceutical, large and innovative SMEs, and leading educational and knowledge institutions that together invest more than €3.3 billion in R&D. Its cores are the top campuses of Leiden and Delft, good for more than 40,000 innovative jobs, the port-industrial complex (logistics and energy), the manufacturing clusters in maritime and aerospace, and the horticultural cluster in the Westland.
South Holland focuses thematically on key technologies such as biotech, quantum technology, and AI. Twente: The green, technological top region of Twente has a long tradition of triple-helix collaboration. Technological innovations from Twente offer worldwide solutions to major societal issues. Work is underway on key technologies such as AI, photonics, robotics, and nanotechnology, and new technology is applied in sectors such as medtech, the manufacturing industry, agriculture, and circular value chains such as textiles and construction. Start-ups and SMEs are of great importance for Twente's jobs of tomorrow; these companies connect Twente's technology with knowledge regions and OEMs at home and abroad. Wageningen in FoodValley: Wageningen Campus is a global agri-food magnet for startups and corporates, supported by the national accelerator StartLife and the student incubator StartHub. With an ambitious 2030 programme, FoodvalleyNL also connects this versatile ecosystem regionally, nationally, and internationally, including through the WEF European food innovation hub. The campus offers its guests and 3,000 private R&D staff an engaging programme of science, innovation, and social dialogue around the challenges in agro-production, food processing, the biobased/circular economy, climate, and biodiversity. The Netherlands industrialized as a logistics country, but it is now striving for sustainable growth by creating an innovative ecosystem through a regional industry-academia research model. In particular, the Brainport cluster, centered on the high-tech industry, pursues regional innovation and is opening a new horizon for existing industry-academia models. Brainport is a state-of-the-art forward base that leads the innovation ecosystem of Dutch manufacturing. The history of Dutch ports is transforming from a logistics-oriented port symbolized by Rotterdam into a "port of digital knowledge" centered on Brainport.
On this basis, the industry-academia cluster model, which links the central government's vision of creating an innovative ecosystem with each region's specialized industries, can be seen as the biggest stepping stone. Through regional development centered on the innovation cluster ecosystem and investment in job creation and new industries, the Netherlands' innovation policy is expected to be ever more faithful to its role as Europe's "digital gateway."

Potassium Physiology of Upland Crops (밭 작물(作物)의 가리(加里) 생리(生理))

  • Park, Hoon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.10 no.3
    • /
    • pp.103-134
    • /
    • 1977
  • The physiological and biochemical role of potassium for upland crops according to recent research reports and the nutritional status of potassium in Korea were reviewed. Since physical and chemical characteristics of potassium ion are different from those of sodium, potassium can not completely be replaced by sodium and replacement must be limited to minimum possible functional area. Specific roles of potassium seem to keep fine structure of biological membranes such as thylacoid membrane of chloroplast in the most efficient form and to be allosteric effector and conformation controller of various enzymes principally in carbohydrate and protein metabolism. Potassium is essential to improve the efficiency of phoro- and oxidative- phosphorylation and involve deeply in all energy required metabolisms especially synthesis of organic matter and their translocation. Potassium has many important, physiological functions such as maintenance of osmotic pressure and optimum hydration of cell colloids, consequently uptake and translocation of water resulting in higher water use efficiency and of better subcellular environment for various physiological and biochemical activities. Potassium affects uptake and translocation of mineral nutrients and quality of products. potassium itself in products may become a quality criteria due to potassium essentiality for human beings. Potassium uptake is greatly decreased by low temperature and controlled by unknown feed back mechanism of potassium in plants. Thus the luxury absorption should be reconsidered. Total potassium content of upland soil in Korea is about 3% but the exchangeable one is about 0.3 me/100g soil. All upland crops require much potassium probably due to freezing and cold weather and also due to wet damage and drought caused by uneven rainfall pattern. In barley, potassium should be high at just before freezing and just after thawing and move into grain from heading for higher yield. 
Use efficiency of potassium was 27% for barley and 58% in old uplands, 46% in newly opened hilly lands for soybean. Soybean plant showed potassium deficiency symptom in various fields especially in newly opened hilly lands. Potassium criteria for normal growth appear 2% $K_2O$ and 1.0 K/(Ca+Mg) (content ratio) at flower bud initiation stage for soybean. Potassium requirement in plant was high in carrot, egg plant, chinese cabbage, red pepper, raddish and tomato. Potassium content in leaves was significantly correlated with yield in chinese cabbage. Sweet potato. greatly absorbed potassium subsequently affected potassium nutrition of the following crop. In the case of potassium deficiency, root showed the greatest difference in potassium content from that of normal indicating that deficiency damages root first. Potatoes and corn showed much higher potassium content in comparison with calcium and magnesium. Forage crops from ranges showed relatively high potassium content which was significantly and positively correlated with nitrogen, phosphorus and calcium content. Percentage of orchards (apple, pear, peach, grape, and orange) insufficient in potassium ranged from 16 to 25. The leaves and soils from the good apple and pear orchards showed higher potassium content than those from the poor ones. Critical ratio of $K_2O/(CaO+MgO)$ in mulberry leaves to escape from winter death of branch tip was 0.95. In the multiple croping system, exchangeable potassium in soils after one crop was affected by the previous crops and potassium uptake seemed to be related with soil organic matter providing soil moisture and aeration. Thus, the long term and quantitative investigation of various forms of potassium including total one are needed in relation to soil, weather and croping system. Potassium uptake and efficiency may be increased by topdressing, deep placement, slow-releasing or granular fertilizer application with the consideration of rainfall pattern. 
In all research seeking a nutritional explanation of crop yield, including the role of potassium, reasonable and practicable nutritional indices will most easily be obtained through multifactor analysis.
Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a powerful class of Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky et al. achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation; the second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and takes a great deal of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset at hand, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. 
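The "fixed feature extractor" setting described above can be sketched in a few lines; this is a toy illustration in which a frozen random projection stands in for a pretrained ConvNet (the weights, dimensions, and data here are hypothetical, not the paper's actual setup, where the features would come from an ImageNet-pretrained AlexNet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained network: a frozen random projection + ReLU.
W_frozen = rng.standard_normal((20, 32))

def extract(x):
    # Fixed feature extractor: feed-forward only; weights are never updated.
    return np.maximum(x @ W_frozen, 0.0)

# "New domain" data: the label depends only on the first input coordinate.
X = rng.standard_normal((100, 20))
y = (X[:, 0] > 0).astype(float)

# Extract frozen features, standardize them, then train ONLY a new linear
# classifier (logistic regression by gradient descent) on top of them.
F = extract(X)
F = (F - F.mean(axis=0)) / F.std(axis=0)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted probabilities
    w -= 0.1 * (F.T @ (p - y)) / len(y)      # gradient step on the classifier
    b -= 0.1 * np.mean(p - y)                # the extractor stays untouched

train_acc = np.mean(((F @ w + b) > 0) == (y == 1))
```

Only `w` and `b` are learned; `W_frozen` plays the role of the pretrained layers that remain fixed throughout training.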
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Overall, our primary pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, since it carries more information about the image; when the three fully connected layers' features are concatenated, the resulting image representation has 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning is improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. 
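The three-step pipeline can be sketched as follows; this is a minimal illustration in which random arrays stand in for the fc6/fc7/fc8 activations (a real run would extract them from a pretrained AlexNet), and PCA is implemented directly via the SVD of the centered feature matrix:

```python
import numpy as np

def concat_layer_features(fc6, fc7, fc8):
    # Step 2: concatenate per-image activations from the three fully
    # connected layers into one 9,192-dimensional representation.
    return np.concatenate([fc6, fc7, fc8], axis=1)

def pca_reduce(X, n_components):
    # Step 3: center the features, then project onto the top principal
    # components obtained from the SVD of the centered data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Stand-ins for the step-1 activations (random, for illustration only).
rng = np.random.default_rng(0)
n_images = 50
fc6 = rng.standard_normal((n_images, 4096))
fc7 = rng.standard_normal((n_images, 4096))
fc8 = rng.standard_normal((n_images, 1000))

X = concat_layer_features(fc6, fc7, fc8)  # shape (50, 9192)
Z = pca_reduce(X, 32)                     # 32 salient dimensions per image
```

The reduced matrix `Z` (one row per image) is what would then be passed to the classifier in place of the raw 9,192-dimensional concatenation.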
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% of the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.