Potential of Bidirectional Long Short-Term Memory Networks for Crop Classification with Multitemporal Remote Sensing Images

  • Kwak, Geun-Ho (PhD Candidate, Department of Geoinformatic Engineering, Inha University) ;
  • Park, Chan-Won (Senior Researcher, National Institute of Agricultural Sciences, Rural Development Administration) ;
  • Ahn, Ho-Yong (Researcher, National Institute of Agricultural Sciences, Rural Development Administration) ;
  • Na, Sang-Il (Researcher, National Institute of Agricultural Sciences, Rural Development Administration) ;
  • Lee, Kyung-Do (Researcher, National Institute of Agricultural Sciences, Rural Development Administration) ;
  • Park, No-Wook (Professor, Department of Geoinformatic Engineering, Inha University)
  • Received : 2020.08.06
  • Accepted : 2020.08.14
  • Published : 2020.08.31

Abstract

This study investigates the potential of the bidirectional long short-term memory (Bi-LSTM) network for efficient modeling of temporal information in crop classification using multitemporal remote sensing images. Unlike unidirectional LSTM models that consider only forward or backward states, Bi-LSTM can account for the temporal dependency of time-series images in both the forward and backward directions. This property of Bi-LSTM can be effectively exploited for crop classification when it is difficult to obtain full time-series images covering the entire growth cycle of crops. The classification performance of Bi-LSTM is compared with that of two unidirectional LSTM architectures (forward and backward) with respect to different input image combinations via a case study of crop classification in Anbandegi, Korea. When full time-series images were used as inputs for classification, Bi-LSTM outperformed the two unidirectional LSTM architectures; however, the difference in classification accuracy was not substantial. In contrast, when the multitemporal images did not include useful information for the discrimination of crops, Bi-LSTM could compensate for the information deficiency by including temporal information from both forward and backward states, thereby achieving the best classification accuracy compared with the unidirectional LSTM. These case study results indicate the efficiency of Bi-LSTM for crop classification, particularly when limited input images are available.

1. Introduction

Crop maps are regarded as one of the most important sources of information for agricultural environment management (Ban et al., 2017; Lee et al., 2017; Na et al., 2017). Remote sensing images have been widely used for producing crop maps owing to their ability to periodically provide regional information (Na et al., 2018; Yoo et al., 2017). However, crop maps generated by classifying remote sensing images inevitably contain errors compared with field surveys. Thus, reliable crop mapping requires improvement in classification accuracy.

As each crop type has its own growth cycle stages, including seeding, flowering, and harvesting, temporal information should be properly modeled to distinguish different crop types when classifying remote sensing images. To account for the different growth characteristics of crops, multitemporal remote sensing images have often been used for crop classification (Rußwurm and Körner, 2018; Sonobe et al., 2017). From a methodological viewpoint, conventional machine learning models, including random forests and support vector machines, have been widely applied to crop classification using multitemporal remote sensing images (Kim et al., 2017; Kwak and Park, 2019). Recently, deep learning models such as convolutional neural networks (CNNs) have also been applied to crop classification and have achieved superior classification performance compared with conventional machine learning models (Guidici and Clark, 2017; Zhong et al., 2019). However, CNN-based models applied to crop classification have not fully accounted for the temporal dependency of multitemporal images, which is useful for differentiating crop types (Ienco et al., 2017). Thus, to fully exploit temporal dependency in crop classification, advanced classification models that can properly model this dependency are required.

Deep learning models adequate for modeling the complex temporal dependency or correlation for crop classification include the recurrent neural network (RNN) and the long short-term memory (LSTM) network. By modeling sequential information in a recursive way, RNN-based models have proven effective in several fields, including speech recognition and natural language processing (Liu and Guo, 2019; Yang et al., 2019), and have been successfully applied to the classification of remote sensing images (Mou et al., 2017; Sun et al., 2019). The LSTM is an advanced RNN architecture that can overcome limitations of the regular RNN, such as the difficulty of learning long-term dependencies. A further improvement is the bidirectional model, which overcomes the limitation of not using future time-step information during the learning process (Schuster and Paliwal, 1997). The bidirectional LSTM (Bi-LSTM) is known to be more effective than the unidirectional LSTM for crop classification using multitemporal remote sensing images owing to its ability to consider both forward and backward temporal state information (Sun et al., 2019).

From a data availability viewpoint, cloud-free optical images cannot always be obtained in summer, particularly during the rainy season, which makes it difficult to construct a time-series dataset that fully accounts for the growth cycle stages of the crops of interest. Classification accuracy becomes poor when cloud-free images are unavailable in summer because the vegetation vitality of most summer crops in Korea peaks from July to August. Thus, satisfactory classification performance must be achieved with only the images acquired at the available dates. Bi-LSTM can mitigate the information deficiency of a limited number of images by increasing the available information content. However, the classification performance of Bi-LSTM has not been fully evaluated when different combinations of multitemporal remote sensing images are used for crop classification.

In this study, the potential of Bi-LSTM for crop classification with multitemporal remote sensing images is investigated with respect to different combinations of time-series datasets in which only limited input images are available. Two unidirectional LSTM models, a forward model and a backward model, are applied and compared with Bi-LSTM. The effectiveness of Bi-LSTM in the case of limited input images is illustrated via a case study of crop classification in Anbandegi, Korea, with multitemporal unmanned aerial vehicle (UAV) images.

2. Study Area and Data

The study area is a part of Anbandegi, which is one of the three major cultivation areas of highland Kimchi cabbage in Korea (Fig. 1). As Kimchi cabbage is sensitive to weather conditions and cannot be grown at high temperatures, it is cultivated mainly in the highlands of Kangwon Province, Korea, in summer (Kwak and Park, 2019). In addition to highland Kimchi cabbage, cabbage and potato are grown in the study area, and several fallow parcels are also present in the region.


Fig. 1. Location map of the study area with UAV imagery acquired on August 15, 2018.

Multitemporal UAV images are used for crop classification because the study area consists of small-scale parcels, and the acquisition of time-series UAV images covering the entire growth cycles of crops is much easier than that of satellite or aerial photographs. Three-channel UAV images, including blue, green, and red spectral channels, were acquired in 2018 using a fixed-wing unmanned aerial system (eBee, senseFly, Switzerland) with a Canon IXUS/ELPH camera. These three visible spectral channels were used for crop classification because they provide sufficient information to discriminate the different crop types without a near-infrared channel (Kwak et al., 2019). For the UAV image acquisition, the forward and side overlaps were set to 75% and 60%, respectively. Radiometrically corrected orthomosaic images were generated using Pix4Dmapper software (Pix4D, Switzerland), taking into account exterior orientation parameters and incoming sunlight information. Finally, nine multitemporal UAV images with a spatial resolution of 50 cm are used as inputs for the classification experiments (Table 1).

Table 1. List of UAV images acquired in the study area


Four classes, comprising three crop types and fallow, are considered for supervised classification. A ground truth map based on field surveys was used to select the training and test datasets for model training and evaluation, respectively. To ensure spatial independence between the training and test datasets, all parcels in the study area were first divided into spatially independent training and test groups, with proportions of 25% and 75%, respectively. Training pixels were then randomly selected within each training parcel by considering the proportion of each crop parcel in the study area (Table 2). A total of 5,000 training samples were selected based on our previous study (Kwak et al., 2019). All pixels within the test parcels were used to assess the classification performance of the different LSTM models.

Table 2. Number of training and test pixels for each class

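As a minimal sketch (not the authors' actual code) of the parcel-level split and proportional pixel sampling described above, the following Python example assumes hypothetical per-pixel arrays parcel_ids and labels derived from the ground truth map.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def split_parcels(parcel_ids, train_fraction=0.25):
    """Assign roughly 25% of whole parcels to training; the rest form the test group."""
    unique_parcels = np.unique(parcel_ids)
    n_train = int(round(train_fraction * unique_parcels.size))
    train_parcels = rng.choice(unique_parcels, size=n_train, replace=False)
    return np.isin(parcel_ids, train_parcels)   # boolean mask of training pixels

def sample_training_pixels(train_mask, labels, n_total=5000):
    """Draw training pixels inside training parcels in proportion to each class's area."""
    idx = np.flatnonzero(train_mask)
    classes, counts = np.unique(labels[idx], return_counts=True)
    sampled = []
    for cls, cnt in zip(classes, counts):
        n_cls = int(round(n_total * cnt / counts.sum()))
        cls_idx = idx[labels[idx] == cls]
        sampled.append(rng.choice(cls_idx, size=min(n_cls, cls_idx.size), replace=False))
    return np.concatenate(sampled)
```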

3. Classification Methods

1) Long Short-Term Memory Model

RNN is an artificial neural network designed for sequential data processing. The key feature of an RNN is that the output at the current time step is determined using both the output at the previous time step and the input at the current time step, which makes it particularly efficient for explicitly accounting for sequential dependency in the data.

The best-known member of the RNN family is the LSTM, which was designed to mitigate a limitation of the regular RNN, namely that long-term dependencies cannot be learned because of vanishing and exploding gradients (Hochreiter and Schmidhuber, 1997). The LSTM unit has two key internal components: states and gates (Fig. 2). The forget gate (f_t) first decides whether to remember or forget the information at the previous time step, and the input gate (i_t) then regulates how much of the information at the current time step needs to be maintained. The current cell state is determined by updating the previous cell state using the outputs of the forget and input gates. The output gate (o_t) controls which part of the current cell state determines the hidden state. Finally, both the cell and hidden states controlled through the three gates are forwarded to the next time step. The gate outputs and states are computed as follows:


Fig. 2. Basic structure of the LSTM unit (modified from Rußwurm and Körner (2018)).

i_t = σ(W_ix x_t + W_ih h_(t-1) + b_i)       (1)

f_t = σ(W_fx x_t + W_fh h_(t-1) + b_f)       (2)

c_t = f_t ⨀ c_(t-1) + i_t ⨀ tanh(W_cx x_t + W_ch h_(t-1) + b_c)       (3)

o_t = σ(W_ox x_t + W_oh h_(t-1) + b_o)       (4)

h_t = o_t ⨀ tanh(c_t)       (5)

where W_*x and W_*h denote the input-to-gate and hidden-to-gate weight matrices, respectively, b_* is a bias vector, and x_t is the model input at time t. ⨀ denotes the element-wise Hadamard product, and σ and tanh are the sigmoid and hyperbolic tangent functions, respectively.
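As an illustration of how Eqs. (1)-(5) are evaluated, the following NumPy sketch computes a single LSTM time step; the dictionary-based weight layout and shapes are assumptions made only for this example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following Eqs. (1)-(5).

    W["ix"], W["fx"], W["cx"], W["ox"]: input-to-gate weights, shape (units, n_features)
    W["ih"], W["fh"], W["ch"], W["oh"]: hidden-to-gate weights, shape (units, units)
    b["i"], b["f"], b["c"], b["o"]:     gate biases, shape (units,)
    """
    i_t = sigmoid(W["ix"] @ x_t + W["ih"] @ h_prev + b["i"])                       # Eq. (1): input gate
    f_t = sigmoid(W["fx"] @ x_t + W["fh"] @ h_prev + b["f"])                       # Eq. (2): forget gate
    c_t = f_t * c_prev + i_t * np.tanh(W["cx"] @ x_t + W["ch"] @ h_prev + b["c"])  # Eq. (3): cell state
    o_t = sigmoid(W["ox"] @ x_t + W["oh"] @ h_prev + b["o"])                       # Eq. (4): output gate
    h_t = o_t * np.tanh(c_t)                                                       # Eq. (5): hidden state
    return h_t, c_t
```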

The LSTM described above solves forward sequence learning problems. However, information on the growth cycle of crops is temporally interrelated from sowing to harvesting in both the forward and backward directions (Kwak et al., 2019). Therefore, the Bi-LSTM, which models sequential data using information from both previous and future time steps, could be a promising model for crop classification with multitemporal remote sensing images.

Bi-LSTM combines two independent LSTMs in which forward and backward information is learned (Fig. 3). For example, the LSTM unit at the second time step outputs a hidden state that concatenates the information learned in the forward direction, from the first time step to the second time step, with the information learned in the backward direction, from the last time step to the second time step.


Fig. 3. Basic structure of the Bi-LSTM model.
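Reusing the lstm_step sketch above, the following conceptual example shows how a bidirectional layer forms its output at every time step by concatenating a forward pass and a backward pass over the same input sequence; the fw and bw weight sets are hypothetical and independent of each other.

```python
import numpy as np

def bilstm_sequence(x_seq, fw, bw, units=64):
    """Return the concatenated hidden state [h_forward_t, h_backward_t] for each time step.

    x_seq: array of shape (T, n_features); fw and bw are (W, b) weight sets
    for the forward and backward LSTMs (see lstm_step above).
    """
    T = x_seq.shape[0]
    h_fw, c_fw = np.zeros(units), np.zeros(units)
    h_bw, c_bw = np.zeros(units), np.zeros(units)
    forward_states, backward_states = [], []
    for t in range(T):                       # forward direction: first to last time step
        h_fw, c_fw = lstm_step(x_seq[t], h_fw, c_fw, *fw)
        forward_states.append(h_fw)
    for t in reversed(range(T)):             # backward direction: last to first time step
        h_bw, c_bw = lstm_step(x_seq[t], h_bw, c_bw, *bw)
        backward_states.append(h_bw)
    backward_states = backward_states[::-1]  # re-align backward states with forward time order
    return [np.concatenate([f, b]) for f, b in zip(forward_states, backward_states)]
```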

2) LSTM Model Construction

The optimal structures of three different LSTM models, the forward LSTM (F-LSTM), the backward LSTM (B-LSTM), and the Bi-LSTM, are constructed for crop classification in the study area based on preliminary experiments (Table 3). Model structures with a stack of three LSTM layers are employed because a multi-layer LSTM can learn non-linear temporal dependencies (Ienco et al., 2017). Each LSTM layer has 64 channels and includes dropout at a rate of 40%. Dropout is a regularization method that randomly removes some neurons during training to avoid dependency on specific neurons (Srivastava et al., 2014). Finally, a fully connected layer with a softmax activation function is stacked on the last LSTM unit to perform the classification. The outputs of the fully connected layer are probability values summing to one, and the class with the maximum probability is assigned as the classification result at each pixel. For the analysis of time-series classification results, a many-to-many mode, which returns an output at every time step, is adopted to produce a classification result at each time step.

Table 3. Parameters of LSTM models applied in this study: t and c denote the numbers of inputs used and classes, respectively


Total trainable parameters of unidirectional LSTM: 83,972

Total trainable parameters of bidirectional LSTM: 233,476
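A minimal Keras (tf.keras API) sketch of the stacked Bi-LSTM classifier summarized in Table 3 is given below; n_timesteps, n_features, and n_classes are placeholders, and the exact placement of dropout within each layer is an assumption rather than the authors' exact configuration.

```python
from tensorflow.keras import layers, models

def build_bilstm(n_timesteps, n_features, n_classes, units=64, dropout_rate=0.4):
    """Three stacked Bi-LSTM layers with 40% dropout, followed by a softmax classifier."""
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),
        layers.Bidirectional(layers.LSTM(units, return_sequences=True)),
        layers.Dropout(dropout_rate),
        layers.Bidirectional(layers.LSTM(units, return_sequences=True)),
        layers.Dropout(dropout_rate),
        layers.Bidirectional(layers.LSTM(units)),   # last layer returns only the final state
        layers.Dropout(dropout_rate),
        layers.Dense(n_classes, activation="softmax"),
    ])
    return model

# For the unidirectional F-LSTM and B-LSTM variants, drop the Bidirectional wrapper
# (and feed the input sequence in reverse time order for the backward model).
# model.summary() reports the trainable-parameter count for a given input shape.
```

For the many-to-many analysis in Section 4, a variant with return_sequences=True on the last LSTM layer followed by a time-distributed dense layer could be used to obtain a classification result at every time step.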

To train the LSTM models, categorical cross-entropy is applied as the loss function, and the Adam optimizer with a learning rate of 1×10⁻⁴ is adopted for its minimization. Each LSTM network is trained for up to 50 epochs, and the training process is stopped early when the training loss no longer decreases to prevent both overfitting and underfitting of the model. The model at the early-stopping epoch is defined as the optimal model. All classification experiments using LSTM were implemented with the Keras Python library (Chollet, 2015).
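The training setup above could be expressed as the following hedged sketch, reusing build_bilstm from the previous example; the batch size, early-stopping patience, and the arrays X_train and y_train are placeholders that are not specified in the text.

```python
from tensorflow.keras import callbacks, optimizers

model = build_bilstm(n_timesteps=9, n_features=3, n_classes=4)   # e.g., case C1 with RGB inputs
model.compile(
    optimizer=optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Stop when the training loss no longer decreases and keep the weights of that epoch.
early_stop = callbacks.EarlyStopping(monitor="loss", patience=5, restore_best_weights=True)

# X_train: (n_samples, n_timesteps, n_features); y_train: one-hot labels (n_samples, n_classes)
history = model.fit(X_train, y_train, epochs=50, batch_size=128, callbacks=[early_stop])
```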

As any deep learning model has stochastic characteristics, classification with each LSTM model is repeated five times. The averages of the five overall accuracy and class-wise accuracy values are used as quantitative measures of classification performance. The classification result with the highest overall accuracy is also used for visual comparison with the test data. Non-crop areas are finally masked out to generate crop maps for the study area.
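Because each training run is stochastic, the repetition and averaging described above can be sketched as follows, reusing build_bilstm and early_stop from the previous examples; X_test and y_test are placeholder arrays built from the test parcels.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from tensorflow.keras import optimizers

overall_accuracies = []
for run in range(5):                                   # five independent training runs
    model = build_bilstm(n_timesteps=9, n_features=3, n_classes=4)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=50, callbacks=[early_stop], verbose=0)
    y_pred = model.predict(X_test).argmax(axis=1)      # class with the maximum probability
    overall_accuracies.append(accuracy_score(y_test.argmax(axis=1), y_pred))

print(f"Mean overall accuracy over 5 runs: {np.mean(overall_accuracies):.4f} "
      f"(std: {np.std(overall_accuracies):.4f})")
```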

3) Experiment Design

As the focus of this case study is comparing the classification performance of Bi-LSTM with that of unidirectional LSTM, different combinations of input images are first generated and then used for a comparative study. Prior to the classification experiments, the growth cycle stages of crops in the study area, based on field surveys, were first analyzed to select UAV images useful for crop discrimination (Fig. 4). As shown in Fig. 4, the sowing times of potato and cabbage are similar (early June), but potato is harvested earlier (in mid-August) than cabbage. Highland Kimchi cabbage is sown later than potato and cabbage and harvested in mid-September, similar to cabbage. The two images acquired on June 28 and July 16 provide useful information for distinguishing highland Kimchi cabbage from cabbage because the difference in vegetation vitality between the two crops is large in these images. In August, highland Kimchi cabbage and cabbage have the highest vegetation vitality, and the difference in vegetation vitality between these two crops and potato is large; thus, the images obtained in August are useful for identifying potato. However, it might be difficult to discern crops that have reached their peak vegetation vitality from fallow, which shows consistently high vegetation vitality. Therefore, it is necessary to collect a time-series image set that can account for the phenological characteristics of all the crops in the study area. As described in the Introduction, however, it is often difficult to construct a full set of time-series images. From an operational viewpoint, it is necessary to select an optimal LSTM architecture that can achieve satisfactory classification performance even when the acquisition of a full or optimal set of time-series images is difficult.


Fig. 4. Vegetation vitality and growth cycles of crops in the study area based on field surveys.

Three representative cases of input image combinations are defined by considering the conditions for UAV image acquisition and the growth cycles of the crop classes in the study area: (1) nine time-series UAV images covering the entire growth cycle of the major crops (C1), (2) three multitemporal UAV images with information for clear distinction of the major crops (C2), and (3) three multitemporal UAV images including images acquired at the seeding stage of the crops (C3). Case C3 represents the case in which the distinction between crops is unclear. By considering the three different LSTM architectures, namely the F-LSTM, B-LSTM, and Bi-LSTM, a total of nine cases are tested for comparison purposes (Table 4).

Table 4. Nine combination cases with different input UAV images and LSTM architectures. The number in parentheses indicates the data number in Table 1


4. Results and Discussion

Fig. 5 presents the average overall accuracy values for the nine combination cases of different input images and LSTM architectures. Regardless of the input image combination, Bi-LSTM outperformed the two unidirectional LSTM architectures. In particular, Bi-LSTM yielded the smallest variation in overall accuracy over the five repetitions, indicating that it is the most stable LSTM architecture in terms of classification accuracy. When comparing the classification results with respect to the different input image combinations, the LSTM models with C1 exhibited the best classification accuracy regardless of architecture, but the difference in overall accuracy between the architectures was not significant. This result is attributed to the rich temporal information of the full time-series dataset covering the entire growth cycle of the major crops. For C2, Bi-LSTM outperformed the other LSTM models by more than 3.4%p. The classification performance of Bi-LSTM with C2 was similar to that with C1 (96.8% vs. 97.6%), whereas the classification performance of both F-LSTM and B-LSTM decreased. A significant decrease in overall accuracy was observed when C3 was used for classification. However, Bi-LSTM still exhibited the highest overall accuracy, with an improvement of approximately 6%p over F-LSTM. F-LSTM showed larger variations in overall accuracy among the five repetitions, indicating the instability of this architecture. The highest classification accuracy of Bi-LSTM demonstrates its superiority over the unidirectional LSTM even when images cannot be acquired during the growing period of the major crops.


Fig. 5. Average overall accuracy of classification results for nine combination cases with different input images and architectures of LSTM. Vertical lines represent one standard deviation of the five overall accuracy values.

To interpret the superiority of Bi-LSTM over the two unidirectional LSTM architectures, the temporal variations of the classification results and classification accuracy for C1 were further analyzed by generating nine time-series crop maps (Fig. 6 and Fig. 7). The growth cycle stages of crops in the study area shown in Fig. 4 were also used to interpret the time-series classification results. As shown in Fig. 6 and Fig. 7(a), the classification result of F-LSTM on the first date showed distinctly different patterns from that of B-LSTM. As the first date for B-LSTM is October 13, when all the crops in the study area had already been harvested, B-LSTM yielded many misclassified pixels and the poorest classification accuracy on that date. The classification accuracies of both unidirectional architectures improved significantly from the third date (i.e., June 28 for F-LSTM and September 4 for B-LSTM) onward by including temporal information sufficient to discriminate the four classes. The information contained in the image of each date is sequentially stacked in the unidirectional LSTM, and all the stacked information is reflected in the classification result. In contrast, as shown in Fig. 7(a), Bi-LSTM yielded the highest classification accuracy for each date by using both the forward and backward state information learned at each date during classification.


Fig. 6. Time-series classification maps of both F-LSTM and B-LSTM with a many-to-many method for C1. The number below each classification map indicates the data number in Table 1. The data number of B-LSTM is the reverse of the actual data number.


Fig. 7. Variations in classification accuracy with respect to each date: (a) overall accuracy, (b) class-wise accuracy of F-LSTM, and (c) class-wise accuracy of B-LSTM. The data number of B-LSTM is the reverse of the actual data number.

The class-wise accuracy presented in Fig. 7(b) and Fig. 7(c) confirms the ability of LSTM to efficiently model temporal correlation information. The first date for F-LSTM in Fig. 7(b) corresponds to a stage when sowing had begun in some crop parcels, but most parcels still contained bare soil. Consequently, highland Kimchi cabbage was misclassified as cabbage because crops were mixed with bare soil within the crop parcels, yielding low classification accuracy. Moreover, on the first date for B-LSTM in Fig. 7(c), when all the crops had already been harvested, crop parcels other than fallow were misclassified as highland Kimchi cabbage, leading to low classification accuracies for cabbage and potato. However, the classification accuracies of the two unidirectional LSTM architectures improved substantially whenever the temporal information after the first date was transferred to the next time step.

Fig. 8 presents the variations in overall accuracy for C2 and C3. The highest accuracy and the smallest variation in accuracy of Bi-LSTM for both C2 and C3 demonstrate its superiority over the unidirectional LSTM. As shown in Fig. 8(a), the classification accuracy of B-LSTM for C2, which used the September image, in which cabbage and potato had already been harvested, as its first input, decreased significantly compared with the other LSTM architectures. In addition, the classification accuracy of F-LSTM for C3 was also low until the second date because only information from the sowing stage was available to F-LSTM (Fig. 8(b)). Considering the variations in the classification accuracy of both F-LSTM and B-LSTM with respect to the input image combination, the classification accuracy of the unidirectional LSTM depends heavily on the information content of the input image at the beginning date: the higher the classification accuracy on the first date, the higher the accuracy of the final classification result. This is particularly clear for C3, in which the input images failed to provide sufficient information to discriminate the crop types. However, Bi-LSTM always exhibited better classification performance than the unidirectional LSTM from the beginning date because of its ability to fully utilize information from both previous and future time steps. These results confirm the effectiveness of Bi-LSTM for crop classification when only limited input images are available.


Fig. 8. Variations in overall accuracy for (a) C2 and (b) C3. The data number of B-LSTM is the reverse of the actual data number.

5. Conclusion

In this study, the potential of Bi-LSTM for crop classification with multitemporal remote sensing images was investigated. The benefits of using temporal information in forward and backward states by Bi-LSTM were analyzed, particularly for the case where neither a full set nor an optimal set of time-series images covering almost the entire growth cycle of crops is available, which is often encountered in crop classification using remote sensing images. A case study of crop classification in a highland Kimchi cabbage cultivation area with time-series UAV images demonstrated the superiority of Bi-LSTM over the conventional unidirectional LSTM, regardless of the input image combination. The improvement of Bi-LSTM over the unidirectional LSTM was greatest when images acquired at times when discrimination between crops was not apparent were used as inputs for crop classification. The classification performance of the unidirectional LSTM was significantly affected by the classification result of the beginning date. In contrast, Bi-LSTM could mitigate the dependency on the initial classification result because of its architecture, which considers temporal information in both forward and backward states. However, these results were obtained from a case study with specific crop types. Thus, more experiments should be conducted on other crop areas where the growth cycles differ from those of the crops in the study area to comprehensively verify the efficiency of Bi-LSTM for crop classification with limited input images.

Acknowledgements

This work was carried out with the support of the “Cooperative Research Program for Agriculture Science & Technology Development (Project No. PJ01350004)”, Rural Development Administration, Republic of Korea.

References

  1. Ban, H.-Y., K. S. Kim, N.-W. Park, and B.-W. Lee, 2017. Using MODIS data to predict regional corn yields, Remote Sensing, 9(1): 16. https://doi.org/10.3390/rs9010016
  2. Chollet, F., 2015. Keras, https://github.com/fchollet/keras, Accessed on Jul. 13, 2020.
  3. Guidici, D. and M. L. Clark, 2017. One-dimensional convolutional neural network land-cover classification of multi-seasonal hyperspectral imagery in the San Francisco Bay area, California, Remote Sensing, 9(6): 629. https://doi.org/10.3390/rs9060629
  4. Hochreiter, S. and J. Schmidhuber, 1997. Long short-term memory, Neural Computation, 9(8): 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
  5. Ienco, D., R. Gaetano, C. Dupaquier, and P. Maurel, 2017. Land cover classification via multitemporal spatial data by deep recurrent neural networks, IEEE Geoscience and Remote Sensing Letters, 14(10): 1685-1689. https://doi.org/10.1109/LGRS.2017.2728698
  6. Kim, Y., N.-W. Park, and K.-D. Lee, 2017. Self-learning based land-cover classification using sequential class patterns from past land-cover maps, Remote Sensing, 9(9): 921. https://doi.org/10.3390/rs9090921
  7. Kwak, G.-H. and N.-W. Park, 2019. Impact of texture information on crop classification with machine learning and UAV images, Applied Sciences, 9(4): 643. https://doi.org/10.3390/app9040643
  8. Kwak, G.-H., M.-G. Park, C.-W. Park, K.-D. Lee, S.-I. Na, H.-Y. Ahn, and N.-W. Park, 2019. Combining 2D CNN and bidirectional LSTM to consider spatio-temporal features in crop classification, Korean Journal of Remote Sensing, 35(5): 681-692 (in Korean with English abstract). https://doi.org/10.7780/kjrs.2019.35.5.1.5
  9. Lee, J., B. Seo, and S. Kang, 2017. Development of a biophysical rice yield model using allweather climate data, Korean Journal of Remote Sensing, 33(5-2): 721-732 (in Korean with English abstract). https://doi.org/10.7780/kjrs.2017.33.5.2.11
  10. Liu, G. and J. Guo, 2019. Bidirectional LSTM with attention mechanism and convolutional layer for text classification, Neurocomputing, 337: 325-338. https://doi.org/10.1016/j.neucom.2019.01.078
  11. Mou, L., P. Ghamisi, and X. X. Zhu, 2017. Deep recurrent neural networks for hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing, 55(7): 3639-3655. https://doi.org/10.1109/TGRS.2016.2636241
  12. Na, S.-I., C.-W. Park, K.-H. So, J.-M. Park, and K.-D. Lee, 2017. Satellite imagery based winter crop classification mapping using hierarchical classification, Korean Journal of Remote Sensing, 33(5): 677-687 (in Korean with English abstract). https://doi.org/10.7780/kjrs.2017.33.5.2.7
  13. Na, S.-I., C.-W. Park, K.-H. So, H.-Y. Ahn, and K.-D. Lee, 2018. Application method of unmanned aerial vehicle for crop monitoring in Korea, Korean Journal of Remote Sensing, 34(5-2): 829-846 (in Korean with English abstract). https://doi.org/10.7780/KJRS.2018.34.5.10
  14. Rußwurm, M. and M. Körner, 2018. Multi-temporal land cover classification with sequential recurrent encoders, ISPRS International Journal of Geo-Information, 7(4): 129. https://doi.org/10.3390/ijgi7040129
  15. Schuster, M. and K. K. Paliwal, 1997. Bidirectional recurrent neural networks, IEEE Transactions on Signal Processing, 45(11): 2673-2681. https://doi.org/10.1109/78.650093
  16. Sonobe, R., Y. Yamaya, H. Tani, X. Wang, N. Kobayashi, and K.-I. Mochizuki, 2017. Mapping crop cover using multi-temporal Landsat 8 OLI imagery, International Journal of Remote Sensing, 38(15): 4348-4361. https://doi.org/10.1080/01431161.2017.1323286
  17. Srivastava, N., G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, 2014. Dropout: A simple way to prevent neural networks from overfitting, Journal of Machine Learning Research, 15(1): 1929-1958.
  18. Sun, Z., L. Di, and H. Fang, 2019. Using long short-term memory recurrent neural network in land cover classification on Landsat and Cropland data layer time series, International Journal of Remote Sensing, 40(2): 593-614. https://doi.org/10.1080/01431161.2018.1516313
  19. Yang, L., Y. Li, J. Wang, and Z. Tang, 2019. Post text processing of Chinese speech recognition based on bidirectional LSTM networks and CRF, Electronics, 8(11): 1248. https://doi.org/10.3390/electronics8111248
  20. Yoo, H. Y., K.-D. Lee, S.-I. Na, C.-W. Park, and N.-W. Park, 2017. Field crop classification using multi-temporal high-resolution satellite imagery: A case study on garlic/onion field, Korean Journal of Remote Sensing, 33(5-2): 621-630 (in Korean with English abstract). https://doi.org/10.7780/kjrs.2017.33.5.2.2
  21. Zhong, L., L. Hu, and H. Zhou, 2019. Deep learning based multi-temporal crop classification, Remote Sensing of Environment, 221: 430-443. https://doi.org/10.1016/j.rse.2018.11.032
