The Efficiency of Long Short-Term Memory (LSTM) in Phenology-Based Crop Classification

  • Ehsan Rahimi (Agricultural Science and Technology Institute, Andong National University) ;
  • Chuleui Jung (Agricultural Science and Technology Institute, Andong National University)
  • Received : 2024.01.24
  • Accepted : 2024.02.20
  • Published : 2024.02.28

Abstract

Crop classification plays a vital role in monitoring agricultural landscapes and enhancing food production. In this study, we explore the effectiveness of Long Short-Term Memory (LSTM) models for crop classification, focusing on distinguishing between apple and rice crops. The aim was to overcome the challenges associated with finding phenology-based classification thresholds by utilizing LSTM to capture the entire Normalized Difference Vegetation Index (NDVI) trend. Our methodology involves training the LSTM model using a reference site and applying it to three separate test sites. First, we generated 25 NDVI images from the Sentinel-2A data. After segmenting the study areas, we calculated the mean NDVI values for each segment. For the reference area, we employed a training approach utilizing the NDVI trend line, which served as the basis for training our crop classification model. Following the training phase, we applied the trained model to the three test sites. The results demonstrated a high overall accuracy of 0.92 and a kappa coefficient of 0.85 for the reference site. The overall accuracies for the test sites were also favorable, ranging from 0.88 to 0.92, indicating successful classification outcomes. We also found that certain phenological metrics can be less effective in crop classification, which underscores the limitations of relying solely on phenological map thresholds and the challenges of detecting phenology in real time, particularly during the early stages of crop growth. Our study demonstrates the potential of LSTM models in crop classification tasks, showcasing their ability to capture temporal dependencies and analyze time-series remote sensing data. While limitations exist in capturing specific phenological events, the integration of alternative approaches holds promise for enhancing classification accuracy.
By leveraging advanced techniques and considering the specific challenges of agricultural landscapes, we can continue to refine crop classification models and support agricultural management practices.

Keywords

1. Introduction

Crop classification from remote sensing data plays a crucial role in ensuring food security and sustainable agriculture (Karthikeyan et al., 2020). However, obtaining accurate spatiotemporal crop data remains challenging, especially for smallholder farms (Wang et al., 2022a; Wang et al., 2022b). The development of efficient crop mapping algorithms is essential for widespread application over large areas (Ashourloo et al., 2022). Nonetheless, crop mapping faces significant challenges in the Land Use/Land Cover (LULC) classification community (Rahimi et al., 2021; 2022). The limited availability of continuous in-situ data, together with intra-class variability and inter-class similarity, poses obstacles (Foerster et al., 2012; Zeng et al., 2020). One approach is obtaining detailed images during the crop-growing season, but acquiring cloud-free images is challenging (Hu et al., 2019; Mahlayeye et al., 2022). Phenology-based algorithms offer an alternative by analyzing crop life cycles and temporal metrics (Liu et al., 2018; Qiu et al., 2015; Tian et al., 2019; Waldner et al., 2015). Vegetation phenology captures the annual plant cycle influenced by biological and non-biological factors (Zhang et al., 2022).

The goal of phenology-based analyses is to monitor and understand the fluctuations in phenological patterns specific to different crops. These patterns encompass the timing, length, and frequency of crop-related events (Arun and Karnieli, 2021). Phenology-based metrics have been utilized in crop-type mapping tasks to address the limitations associated with traditional crop classification methods (Zhong et al., 2016). Certain phenology-based classification approaches have demonstrated that incorporating essential features can enhance mapping accuracy by facilitating better differentiation among crop types. In this context, phase and amplitude information obtained through Fourier transformation (FT) of time-series data is utilized to depict the vegetation condition over time (Mingwei et al., 2008).

Previous studies have demonstrated the significant potential of crop phenology algorithms in agricultural remote sensing mapping. These algorithms have successfully identified key phenological stages, such as the start and end of the growing season, peak growth, peak drought, and growth cycle duration (Araya et al., 2018; Filippa et al., 2016; Tan et al., 2010). However, phenology-based and thresholding methods require accurate threshold values and training data, posing challenges for large-scale mapping (Tang et al., 2016; Zhao et al., 2013). Moreover, these methods have limitations in accurately classifying small-sized crop fields.

In recent years, Deep Learning (DL) has emerged as the leading approach in computer vision, demonstrating superior performance in classifying both natural images and remote sensing data. Deep learning methods, characterized by their ability to learn hierarchical and abstract representations, have demonstrated superior performance compared to traditional machine learning techniques in a wide range of Earth Observation (EO) data applications (Ko et al., 2021; Kwak et al., 2020; Kwak and Park, 2021). DL approaches excel at transforming input data into intrinsic manifolds through unsupervised learning, resulting in improved outcomes (Yuan et al., 2020).

When dealing with dense time series data, Recurrent Neural Network (RNN) methods have shown great promise due to their ability to analyze sequential information effectively. Among RNNs, the Long Short-Term Memory (LSTM) model has gained popularity for its efficient capture of time correlations (Crisóstomo de Castro Filho et al., 2020). LSTM-based approaches utilize a recurrent guided architecture to capture and model sequential patterns. Nevertheless, many current deep learning classifiers, such as LSTMs, treat spectral curves as simple vectors, neglecting the incorporation of physically significant features (Arun et al., 2019; Zaremba et al., 2014).

LSTM enables the assessment of phenological variations in plantations by detecting pixel coherence within a series of time-based data (Guo et al., 2016; Kwak and Park, 2021; Rußwurm and Körner, 2017). For example, Zhu et al. (2021) introduce a pheno-deep method for mapping rice paddy distribution using LSTM. It combines the simplicity of phenological methods with the learning ability of deep learning. The pheno-deep method achieves high accuracy without requiring field samples, outperforming the phenological method alone. Its overall accuracy is only slightly lower than the deep learning alone method trained with field samples. This study demonstrates the potential of combining knowledge-based and data-driven methods for accurate mapping using remote sensing, even with limited field sampling efforts. Arun and Karnieli (2021) also propose a variational capsule network (VCapsNet) for crop classification using time-series vegetation index (VI) curves. VCapsNet learns phenological curve features, combines denoising and classification optimization, and improves accuracy, even with limited training samples. The approach successfully identifies crop transitions and shows applicability to acreage estimation and other scales.

It appears that there has been limited attention given to deep learning methods, such as RNNs and LSTM, for crop classification based on phenology. These advanced deep-learning techniques have shown promising results in various applications but have not been extensively explored in the context of crop classification using phenological information. There is a potential for further research and development in this area to harness the capabilities of deep learning models for accurate and efficient crop classification based on phenology (Crisóstomo de Castro Filho et al., 2020; Katal et al., 2022).

Mapping crops accurately in regions with frequent rain and cloud cover poses significant challenges, especially when dealing with small agricultural parcels. Traditional phenology-based approaches may face limitations in capturing the temporal dynamics of crop growth due to the inadequate availability of cloud-free images. Alternatively, deep learning methods have shown promise in crop classification, relying on training data and spectral characteristics. In regions like South Korea with a high frequency of rainy and cloudy days, particularly during the peak vegetation growth period, and where agricultural parcels are predominantly small (less than 1 hectare), mapping crops using phenology-based approaches can be challenging (Jain et al., 2013).

Therefore, taking into consideration the challenges associated with finding thresholds in phenology-based crop mapping and the variability in phenological measurements of the same crop due to different cultivation times, our study aims to explore the effectiveness of an LSTM phenology-based approach in crop classification. By utilizing the entire NDVI dynamics graph and training the model based on the complete temporal information, rather than relying on specific points or thresholds, we seek to assess the comparative advantages and performance of the LSTM approach in accurately classifying crops. We believe this approach has the potential to overcome the limitations of traditional threshold-based methods and provide more robust and reliable crop classification results.

2. Materials and Methods

2.1. Study Area

Our study area is in Andong City, the capital of North Gyeongsang Province in South Korea. Andong serves as a hub for the surrounding agricultural areas and plays a vital role in the distribution and trade of agricultural products produced in the region. The city’s economic activity is closely tied to agriculture, and its market plays an essential role in supporting local farmers and the agricultural industry. After conducting field surveys in the study area, several cover types were identified, including rice, apple, and man-made structures. These classes were deemed relevant for further analysis and were therefore considered in the subsequent stages of the study. In this study, we designated a specific area as the reference area, which served as a benchmark for evaluating our model’s performance.

Additionally, we selected three separate test areas to assess the generalizability and robustness of our model across different spatial contexts. Fig. 1 depicts the geographical position of the study areas within South Korea. The satellite imagery displayed is from Sentinel-2A and was captured on June 18, 2022. The dark green areas depicted in this figure indicate rice crops, providing visual evidence that our study area is predominantly characterized by rice cultivation.


Fig. 1. Land use map of South Korea (Buchhorn et al., 2020) and the geographic location of the study areas within South Korea. (a) Reference area, (b) test area 1, (c) test area 2, and (d) test area 3.

2.2. Data

In recent years, the launch of the European Space Agency’s twin Sentinel-2 satellites has revolutionized data availability in remote sensing (Misra et al., 2020). These satellites offer high-resolution data with a resolution of 10 meters and revisit the same location every five days, leading to a significant improvement in acquiring cloud-free images. However, despite these advancements, accessing and utilizing dense high-resolution datasets for retrospective analyses remains a challenge. Nevertheless, the availability of Sentinel-2 images presents a valuable resource for capturing time series imagery (Misra et al., 2020). This unique combination of high spatial and temporal resolution provides an exceptional opportunity to gather detailed information about the dynamics of crop phenology. By harnessing these datasets, researchers can gain comprehensive insights into the temporal patterns and changes in crop growth stages over time. In our study, we utilized Sentinel-2A images from the year 2022 for extracting the NDVI time series. During our image selection process, we prioritized cloud-free images that required minimal atmospheric correction. As a result, we identified 25 suitable images for NDVI extraction in our study area (Table 1). It is worth noting that we specifically focused on images with a spatial resolution of 10 meters.
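The NDVI itself is a simple band ratio. A minimal sketch of the computation from Sentinel-2A’s near-infrared (B8) and red (B4) reflectance, with invented reflectance values for illustration:

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against division by zero.
    For Sentinel-2A at 10 m, NIR is band B8 and Red is band B4."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# A healthy-vegetation pixel reflects strongly in NIR and weakly in red.
print(ndvi(0.45, 0.05))  # close to 0.8
```

Applied band-wise to each of the 25 acquisition dates, this yields the NDVI image stack used in the study.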

Table 1. Distribution of Sentinel dataset across different months of the year


2.3. Phenology Metrics Calculation

For the phenology-based crop mapping, we initially computed 25 Normalized Difference Vegetation Index (NDVI) images for the study area. To smooth the NDVI values, we utilized the “sgolayfilt” function in the “signal” R package. This function applies a Savitzky-Golay 5 × 5 filter, which is a widely used method for smoothing noisy data (Araya et al., 2018). The filter helps to reduce fluctuations in the NDVI time series, providing a more continuous and representative pattern of vegetation growth over time. Subsequently, we calculated 10 phenology metrics for each pixel in the study area, as outlined in Table 2. The definition and description of each metric are provided in Table 2 to facilitate comprehension. Fig. 2 complements the understanding of these metrics by presenting a schematic representation of nine of them.
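The smoothing step can be sketched in Python with SciPy’s `savgol_filter`, a direct analogue of the `sgolayfilt` function from the R “signal” package. The synthetic NDVI series below is invented for illustration, and the polynomial order of 2 is an assumption; the paper’s “5 × 5 filter” is read here as a 5-point window:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for one segment's 25-date NDVI series (illustrative only)
dates = np.linspace(0.0, 1.0, 25)
clean = 0.1 + 0.6 * np.exp(-((dates - 0.5) ** 2) / 0.05)
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 0.03, size=25)

# Savitzky-Golay smoothing: fit a low-order polynomial in a sliding window
smooth = savgol_filter(noisy, window_length=5, polyorder=2)
print(smooth.round(2))
```

The smoothed series, not the raw one, is what the phenology metrics are extracted from.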

Table 2. Definition of the phenological indices calculated in the study



Fig. 2. The NDVI dynamics curve shows eight phenological metrics. DOY: day of the year.

In particular, the metrics “Min” and “Max” correspond to the minimum and maximum values of the NDVI trend line, respectively. The “Difference” metric is obtained by subtracting the minimum value from the maximum value. The metrics “TMin” and “TMax” represent the dates corresponding to the minimum and maximum NDVI values, respectively. The “SOS” (Start of Season) and “EOS” (End of Season) metrics indicate the dates when the NDVI value first rises above 0.2 and last falls below 0.2 on the descending side of the NDVI trend line, respectively. By subtracting the SOS from the EOS, we obtain the “Timelength” metric, which represents the duration of the growing season. The “Count 0.2” metric represents the number of dates on which the NDVI value is greater than 0.2. NDVI values range between –1 and 1. Negative NDVI values, close to –1, typically correspond to water bodies. Values close to zero (–0.1 to 0.1) generally indicate areas covered with barren rock, sand, or snow, while values above 0.2 are associated with vegetation (Guha et al., 2021). The “TafterMax” metric indicates the time it takes for the crop reflectance to fall back to 0.2 after reaching the maximum NDVI value. The “AUC” (Area Under Curve) metric quantifies the area under the NDVI curve, which provides an overall measure of vegetation growth intensity throughout the growing season and was calculated using the “trapz” function in R software. All other phenology metrics were also calculated using R software.
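These definitions can be sketched directly in code. The names and the 0.2 threshold follow the text; the bell-shaped NDVI season below is synthetic, invented for illustration, and the AUC is computed with the trapezoidal rule as in R’s `trapz`:

```python
import numpy as np

def phenology_metrics(doy, ndvi, threshold=0.2):
    """Phenology metrics from a smoothed NDVI trend line, following the
    definitions in the text (SOS/EOS at the 0.2 threshold)."""
    doy = np.asarray(doy, float)
    ndvi = np.asarray(ndvi, float)
    above = np.where(ndvi > threshold)[0]
    imax = int(ndvi.argmax())
    m = {
        "Min": float(ndvi.min()),
        "Max": float(ndvi.max()),
        "Difference": float(ndvi.max() - ndvi.min()),
        "TMin": float(doy[ndvi.argmin()]),
        "TMax": float(doy[imax]),
        "Count0.2": int(len(above)),
        # Area under the NDVI curve by the trapezoidal rule
        "AUC": float(np.sum((ndvi[1:] + ndvi[:-1]) / 2.0 * np.diff(doy))),
    }
    if len(above) > 0:
        m["SOS"] = float(doy[above[0]])           # first date NDVI exceeds 0.2
        m["EOS"] = float(doy[above[-1]])          # last date NDVI exceeds 0.2
        m["Timelength"] = m["EOS"] - m["SOS"]     # growing-season length
        after = above[above >= imax]
        m["TafterMax"] = float(doy[after[-1]]) - m["TMax"]  # peak back to 0.2
    return m

# Synthetic bell-shaped season sampled on 25 dates (illustrative only)
doy = np.arange(0, 250, 10)
ndvi = 0.05 + 0.6 * np.exp(-((doy - 130.0) ** 2) / 2000.0)
metrics = phenology_metrics(doy, ndvi)
print(metrics["SOS"], metrics["EOS"], metrics["Timelength"])
```

For this synthetic curve the season starts at DOY 80, ends at DOY 180, and lasts 100 days; real segment trend lines would replace the synthetic series.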

2.4. Segment-Based Phenology Extraction

The phenology classification procedure involved several steps. Firstly, the target image obtained on June 18, 2022, was segmented using eCognition software. The segmentation analysis was performed using a scale parameter of 20 and a color threshold of 0.9. This process aimed to group similar pixels based on their spectral characteristics and spatial relationships. The analysis resulted in the generation of an output shapefile containing 1,518 segments for the reference area. Additionally, separate shapefiles were created for the test areas, with test area 1 containing 522 segments, test area 2 containing 1,189 segments, and test area 3 containing 847 segments. Each segment represented a distinct region within the study area, characterized by homogeneous attributes such as color and texture. Average NDVI values were then calculated using R software for each segment.

In the next step, we used ArcGIS software to extract mean NDVI values into an Excel format for subsequent analysis in R software. This allowed for the calculation of 10 phenology metrics for each of the study areas. Then, ArcGIS was employed to convert the analyzed data back into a raster format. This process assigned the calculated phenology metric values to their respective locations on the raster grid, resulting in phenology maps. These maps provided a spatial representation of the distribution of the phenology metrics across the study area.
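The per-segment averaging amounts to a zonal mean. A toy sketch with an invented 4 × 4 segment-label raster standing in for the rasterized eCognition output (one NDVI date shown; the paper repeats this over all 25 dates to build each segment’s time series):

```python
import numpy as np

# Toy segment-label raster and one NDVI image on the same grid
segments = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [3, 3, 2, 2],
                     [3, 3, 3, 3]])
ndvi = np.array([[0.80, 0.70, 0.10, 0.20],
                 [0.90, 0.80, 0.20, 0.10],
                 [0.40, 0.50, 0.10, 0.20],
                 [0.50, 0.40, 0.50, 0.40]])

# Zonal mean: one NDVI value per segment per date
means = {int(i): float(ndvi[segments == i].mean()) for i in np.unique(segments)}
print(means)
```

Segment 1 averages to 0.80, segment 2 to 0.15, and segment 3 to 0.45; in the study these values feed the phenology-metric calculation and, mapped back to the raster, the phenology maps.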

2.5. LSTM

The LSTM layers are responsible for capturing the sequential patterns and dependencies in the input time series data. LSTM networks are a type of RNN that can effectively model temporal dependencies. By processing the input time series data through the LSTM layers, the model learns to recognize and extract relevant patterns, trends, and relationships present in the data. These learned features are then used by the subsequent layers (dropout and dense layers) to make predictions or classifications. In essence, the model learns to extract important temporal features from the input time series data implicitly through the training process, without explicitly specifying or predefining the features. This is one of the strengths of deep learning models like LSTMs, as they can automatically learn and adapt to the inherent characteristics of the input data (Kwak et al., 2019; 2020).
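The memory mechanism described above can be made concrete with a single NumPy LSTM cell. This is a didactic sketch with random weights, not the trained 128-unit model used in the study: the gates decide what the cell state keeps, forgets, and exposes as each NDVI value arrives.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, n):
    """One LSTM time step over input x, hidden state h, cell state c."""
    z = W @ x + U @ h + b                      # pre-activations for all gates
    i, f, o = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))  # input/forget/output gates
    g = np.tanh(z[3 * n:])                     # candidate cell update
    c = f * c + i * g                          # memory: keep some, add some
    h = o * np.tanh(c)                         # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 1, 4                             # 1 NDVI value per step, 4 hidden units
W = rng.normal(0, 0.5, (4 * n_hid, n_in))
U = rng.normal(0, 0.5, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for v in [0.1, 0.3, 0.7, 0.6, 0.2]:            # a short NDVI sequence
    h, c = lstm_step(np.array([v]), h, c, W, U, b, n_hid)
print(h)                                        # final state summarizes the whole sequence
```

Because the cell state is carried across steps, the final hidden state depends on the whole trajectory, not just the last value — which is why an LSTM can exploit the full NDVI trend rather than a single threshold.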

To assess the efficiency of the LSTM model, we focused on the reference area and utilized it for extracting both training and test data. For each of the apple, rice, and man-made classes, we specifically selected 80 rows from an Excel dataset as training data. Additionally, we reserved 20 rows for accuracy assessment purposes. In our study, we employed an LSTM model for classification, utilizing 128 units in each LSTM layer. Dropout regularization with a rate of 0.3 was applied between the LSTM layers to mitigate overfitting. The model architecture consisted of multiple LSTM layers, which were further enhanced by incorporating dropout layers. This configuration aimed to improve the model’s generalization capabilities and prevent overfitting by randomly dropping out a fraction of the units during training. To train and evaluate the LSTM model, we compiled it using the Adam optimizer, which is a popular optimization algorithm for deep learning models. The loss function chosen for this classification task was ‘sparse_categorical_crossentropy’, which is suitable for multi-class classification problems. In addition, we selected ‘accuracy’ as the metric to monitor during training.

After compiling the model, we proceeded to fit it to the training data. During training, we ran the model for 10 epochs, meaning that the entire training dataset was passed through the model 10 times. The batch size was set to 32, which determines the number of samples processed by the model before updating the weights. To evaluate the efficiency of our LSTM model, we conducted separate testing on three distinct test areas. These test areas were chosen to represent different geographic regions or conditions that might affect the classification performance. By applying the trained model to each test area, we were able to assess its generalization capabilities and determine its performance in different settings. This approach allowed us to evaluate the robustness and effectiveness of the model across multiple scenarios (Fig. 3).


Fig. 3. Flowchart of methodology.

2.6. Accuracy Assessment

To calculate the overall accuracy and kappa coefficient for each classified map, a total of 300 random points were generated across the four study areas. These points were then cross-referenced with Google Earth imagery and the previously collected field data to determine the actual crop type at each point. By comparing the crop type assigned by the classified map with the actual crop type in a confusion matrix, the overall accuracy and kappa coefficient were calculated as evaluation metrics for classification accuracy.
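Both metrics follow directly from the confusion matrix. A sketch with a hypothetical 3-class matrix (the counts below are invented for illustration, not the study’s results):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference class, columns = mapped class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical counts for apple, rice, and man-made classes
cm = [[40, 3, 2],
      [4, 45, 1],
      [1, 2, 22]]
oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 2), round(kappa, 2))
```

Kappa discounts the agreement expected by chance, which is why it is always at or below the overall accuracy for the same matrix.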

3. Results

Fig. 4 presents examples of phenological maps generated for the study area. Among these maps, the “Min” metric map (Fig. 4a) reveals variations in minimum values across different pixels, indicating the potential for distinguishing between different crops. Conversely, the “Max” metric map (Fig. 4b) displays a uniform representation for all agricultural parcels, underscoring the limited usefulness of the “Max” metric for crop classification in this study. The “SOS” metric (Fig. 4c) exhibits distinct patterns that can highlight specific farms, whereas the “EOS” metric map (Fig. 4d) shows little ability to separate crops. The “Count 0.2” metric (Fig. 4e) proves valuable, as it highlights specific areas that align with the “SOS” metric map. The “AUC” metric (Fig. 4f) shows patterns similar to the “Count 0.2” metric. The “Timelength” metric map (Fig. 4g) shares similarities with the “SOS” metric, and both potentially contribute to the differentiation of farms. The “TMax” map (Fig. 4h) demonstrates low potential for distinguishing between different crops.


Fig. 4. Examples of phenological maps calculated for the reference area. (a) Min, (b) Max, (c) SOS, (d) EOS, (e) Count 0.2, (f) AUC, (g) Timelength, and (h) TMax.

Fig. 5 displays the NDVI dynamics curves of the three classes: apple, rice, and man-made. The graph reveals distinct patterns in certain segments of the trend lines, indicating the potential for crop differentiation. For instance, the trend line for man-made structures consistently exhibits NDVI values below 0.2 throughout the year, suggesting that these structures can be easily classified. Similarly, other crops demonstrate unique patterns in the maximum NDVI values, allowing for differentiation based on the “AUC” metric. It is important to note that these dynamics curves are drawn from pure samples, and the actual classification of different crops may pose challenges in real-world scenarios.


Fig. 5. The NDVI dynamics curves of apple, rice, and man-made structures such as plastic houses.

3.1. Image Classification

Fig. 6 displays the classified maps of both the reference and test areas. The results indicate that the studied areas are primarily dominated by rice paddies and apple orchards. However, compared to the other test areas, test area 3 (Fig. 6d) shows a notable absence of the man-made class.


Fig. 6. Crop classified maps of the (a) reference area, (b) test area 1, (c) test area 2, and (d) test area 3.

3.2. Accuracy Assessment of Classified Maps

Table 3 presents the classification accuracy results for the classified maps of the study areas. Confusion matrices were generated using ground truth points, allowing for the calculation of overall accuracy and kappa coefficient for each classified map. The overall accuracies achieved for all maps are deemed acceptable, surpassing 88%. Additionally, the kappa coefficients range from 0.74 to 0.85. The kappa coefficient is a measure of agreement between the classified maps and the ground truth data, with values closer to 1 indicating a higher level of agreement; the range observed here suggests that the level of agreement varies across the study areas.

Table 3. The classification accuracy of the crop maps


4. Discussion

In this study, we employed LSTM as a recurrent neural network to classify the two main crops, apples and rice. The idea behind this approach was to overcome the time-consuming task of finding thresholds for phenology-based classification. Instead, we utilized LSTM to capture the entire NDVI trend and trained the model to differentiate between the studied crops. Fig. 4 demonstrates that certain phenological metrics are not effective in distinguishing between apple and rice crops.

However, Fig. 5 reveals significant phenological variations for these crops. Despite this, relying solely on phenological map thresholds remains challenging for accurately differentiating between apple and rice crops. These limitations can make it difficult to detect phenology in real time and manage fields during the early stages of crop growth, especially when the available data capture only a portion of the growing season or the vegetation index trajectory has not yet reached its peak (Yang et al., 2020).

For instance, in Fig. 4, phenological metrics such as Min, Max, EOS, Timelength, and TMax do not appear to be reliable metrics for crop classification based on phenology thresholds. However, metrics like SOS, Count 0.2, and AUC exhibit more promising potential for achieving accurate crop classification based on phenological characteristics. These metrics may provide valuable insights and could be more suitable for differentiating between apple and rice crops.

In this study, we evaluated the effectiveness of LSTM in crop classification by training the model using data from a reference site and then applying the trained model to three separate test sites. The results from the reference site showed a high classification accuracy and kappa coefficient, indicating that the LSTM method can be efficient for crop classification even with a limited amount of training data. However, when we applied the trained model to the test areas where no training data was available, the overall accuracy and kappa coefficient decreased. Despite this, the overall accuracy of the test areas remained relatively high, exceeding 0.88. These findings suggest that the LSTM approach holds promise for crop classification in different areas, but the performance may vary depending on the availability of training data specific to each location.

We worked on small-scale agricultural areas in South Korea, where farms are typically small and primarily cultivate rice. Due to the nature of these small-scale farms, crop classification can be particularly challenging. The limited spatial extent of the study areas added a layer of complexity to the task. However, despite these challenges, we were able to apply the LSTM model and achieve promising results in crop classification.

This highlights the potential of LSTM and deep learning approaches to overcome the difficulties associated with classifying crops in small-scale agricultural landscapes. By leveraging the power of these advanced techniques, we can enhance our understanding of crop patterns and support agricultural management in such regions. However, other studies claim that time-series vegetation index-based approaches, such as threshold-based and RNN-based methods, might not be well-suited for smallholder farmers (Yang et al., 2020).

Zhu et al. (2021) also propose a method called “pheno-deep” that combines phenological methods with deep learning for mapping rice paddy distribution from remote sensing images. The pheno-deep method achieves high accuracy without the need for field samples, overcoming the limitations of both phenological methods and deep learning alone. They demonstrated that combining knowledge-based and data-driven methods can achieve accurate mapping without extensive field sampling efforts. Li et al. (2020) also applied a novel method that combines a generative adversarial network (GAN), a convolutional neural network (CNN), and LSTM models for crop classification from remote sensing time-series images. The proposed method achieved the best classification results with a kappa coefficient of 0.79 and an overall accuracy of 0.86.

It seems that LSTM models have garnered increasing attention in the field of crop classification. Their ability to capture temporal dependencies and learn from sequential data makes them well-suited for analyzing time-series remote sensing images and extracting valuable information for crop classification tasks, especially in South Korea. For example, Kwak et al. (2019) utilized a hybrid deep learning approach named 2D Convolution with Bidirectional LSTM (2DCBLSTM), demonstrating its efficacy in integrating spatial and temporal features for crop classification in South Korea. In this proposed model, spatial features of crops are initially extracted using 2D convolution operators, which are subsequently fed into a bidirectional LSTM model to effectively handle temporal features. In another study, Kwak et al. (2020) explored the capability of bidirectional LSTM (Bi-LSTM) in effectively capturing temporal information for crop classification with multitemporal remote sensing images. Their findings suggest the efficacy of Bi-LSTM in this context, especially in scenarios with limited input images. As a result, researchers have been exploring and harnessing the potential of LSTM models in agricultural applications, including crop classification, to enhance the monitoring and understanding of crop conditions and food production.

However, one of the limitations associated with LSTM classifiers is their primary focus on capturing sequential patterns and dependencies in data, while disregarding the specific characteristic features and their spatial arrangement within the sequence (Arun et al., 2019). LSTM models are designed to learn long-term dependencies by maintaining a memory state that retains information from earlier time steps. However, they treat the input data as a sequence of values without explicitly considering the underlying characteristics or the spatial relationships among the features. This can pose challenges, particularly in capturing and differentiating specific phenological events, such as the start, duration, and occurrences of crop events.

If the sequential patterns alone do not provide sufficient discriminative information, LSTM classifiers may struggle to distinguish between different phenological events happening at similar time steps (Kwak et al., 2019). To address this limitation, alternative approaches like capsule-based feature learning or the incorporation of additional feature extraction techniques can be employed. These methods aim to capture and leverage the characteristic features of events and their relative spatial locations within the sequence, thereby enhancing classification accuracy and overall performance (Arun and Karnieli, 2021).

5. Conclusions

In conclusion, our study demonstrates the potential of LSTM models for crop classification tasks, particularly in distinguishing between apple and rice crops. By utilizing the entire NDVI trend and training the model to capture temporal dependencies, we achieved favorable classification results. However, the limitations of relying solely on phenological map thresholds were evident, and certain phenological metrics proved to be less effective in differentiating between the studied crops. We also observed that the performance of LSTM models in crop classification can be influenced by factors such as the availability of training data and the spatial extent of the study areas. Despite the challenges associated with small-scale agricultural landscapes, our study demonstrates the feasibility of applying LSTM models in such contexts. However, it is worth noting that other studies suggest limitations of time-series vegetation index-based approaches for smallholder farmers. The increasing attention paid to LSTM models in crop classification reflects their ability to capture temporal dependencies and analyze time-series remote sensing images effectively.

However, the inherent focus on sequential patterns poses challenges in capturing specific phenological events and their discriminative features. To address this, alternative approaches such as capsule-based feature learning and additional feature extraction techniques can be employed to enhance classification accuracy and performance. Overall, our study contributes to the growing body of research exploring the potential of LSTM models in agricultural applications, particularly in crop classification. By leveraging advanced techniques and considering the specific challenges and requirements of different agricultural landscapes, we can continue to improve our understanding of crop conditions, support food production, and enhance agricultural management practices. Future studies should further explore and refine the integration of different methods to maximize the accuracy and applicability of crop classification models.

Acknowledgments

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (Grant No. NRF-2018R1A6A1A03024862), and by the Rural Development Administration agenda project on pollination network (RS-2023-00232335).

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

  1. Araya, S., Ostendorf, B., Lyle, G., and Lewis, M., 2018. Crop Phenology: An R package for extracting crop phenology from time series remotely sensed vegetation index imagery. Ecological Informatics, 46, 45-56. https://doi.org/10.1016/j.ecoinf.2018.05.006
  2. Arun, P. V., Buddhiraju, K. M., and Porwal, A., 2019. CapsuleNet-based spatial-spectral classifier for hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(6), 1849-1865. https://doi.org/10.1109/JSTARS.2019.2913097
  3. Arun, P. V., and Karnieli, A., 2021. Deep learning-based phenological event modeling for classification of crops. Remote Sensing, 13(13), 2477. https://doi.org/10.3390/rs13132477
  4. Ashourloo, D., Nematollahi, H., Huete, A., Aghighi, H., Azadbakht, M., Shahrabi, H. S., and Goodarzdashti, S., 2022. A new phenology-based method for mapping wheat and barley using time-series of Sentinel-2 images. Remote Sensing of Environment, 280, 113206. https://doi.org/10.1016/j.rse.2022.113206
  5. Buchhorn, M., Smets, B., Bertels, L., De Roo, B., Lesiv, M., Tsendbazar, N.-E., Herold, M., and Fritz, S., 2020. Copernicus global land service: Land cover 100m: Collection 3: Epoch 2019: Globe. Version V3.0.1. https://doi.org/10.5281/zenodo.3518038
  6. Crisostomo de Castro Filho, H., Abilio de Carvalho Junior, O., Ferreira de Carvalho, O. L., Pozzobon de Bem, P., dos Santos de Moura, R., Olino de Albuquerque, A. et al., 2020. Rice crop detection using LSTM, Bi-LSTM, and machine learning models from Sentinel-1 time series. Remote Sensing, 12(16), 2655. https://doi.org/10.3390/rs12162655
  7. Filippa, G., Cremonese, E., Migliavacca, M., Galvagno, M., Forkel, M., Wingate, L., Tomelleri, E., Di Cella, U. M., and Richardson, A. D., 2016. Phenopix: An R package for image-based vegetation phenology. Agricultural and Forest Meteorology, 220, 141-150. https://doi.org/10.1016/j.agrformet.2016.01.006
  8. Foerster, S., Kaden, K., Foerster, M., and Itzerott, S., 2012. Crop type mapping using spectral-temporal profiles and phenological information. Computers and Electronics in Agriculture, 89, 30-40. https://doi.org/10.1016/j.compag.2012.07.015
  9. Guha, S., Govil, H., Gill, N., and Dey, A., 2021. A long-term seasonal analysis on the relationship between LST and NDBI using Landsat data. Quaternary International, 575-576, 249-258. https://doi.org/10.1016/j.quaint.2020.06.041
  10. Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., and Lew, M. S., 2016. Deep learning for visual understanding: A review. Neurocomputing, 187, 27-48. https://doi.org/10.1016/j.neucom.2015.09.116
  11. Hu, Q., Sulla-Menashe, D., Xu, B., Yin, H., Tang, H., Yang, P., and Wu, W., 2019. A phenology-based spectral and temporal feature selection method for crop mapping from satellite time series. International Journal of Applied Earth Observation and Geoinformation, 80, 218-229. https://doi.org/10.1016/j.jag.2019.04.014
  12. Jain, M., Mondal, P., DeFries, R. S., Small, C., and Galford, G. L., 2013. Mapping cropping intensity of smallholder farms: A comparison of methods using multiple sensors. Remote Sensing of Environment, 134, 210-223. https://doi.org/10.1016/j.rse.2013.02.029
  13. Karthikeyan, L., Chawla, I., and Mishra, A. K., 2020. A review of remote sensing applications in agriculture for food security: Crop growth and yield, irrigation, and crop losses. Journal of Hydrology, 586, 124905. https://doi.org/10.1016/j.jhydrol.2020.124905
  14. Katal, N., Rzanny, M., Mader, P., and Waldchen, J., 2022. Deep learning in plant phenological research: A systematic literature review. Frontiers in Plant Science, 13, 805738. https://doi.org/10.3389/fpls.2022.805738
  15. Ko, K.-S., Kim, Y.-W., Byeon, S.-H., and Lee, S.-J., 2021. LSTM based prediction of ocean mixed layer temperature using meteorological data. Korean Journal of Remote Sensing, 37(3), 603-614. https://doi.org/10.7780/kjrs.2021.37.3.19
  16. Kwak, G.-H., Park, C.-W., Ahn, H.-Y., Na, S.-I., Lee, K.-D., and Park, N.-W., 2020. Potential of bidirectional long short-term memory networks for crop classification with multi-temporal remote sensing images. Korean Journal of Remote Sensing, 36(4), 515-525. https://doi.org/10.7780/kjrs.2020.36.4.2
  17. Kwak, G.-H., Park, M.-G., Park, C.-W., Lee, K.-D., Na, S.-I., Ahn, H.-Y., and Park, N.-W., 2019. Combining 2D CNN and bidirectional LSTM to consider spatio-temporal features in crop classification. Korean Journal of Remote Sensing, 35(5-1), 681-692. https://doi.org/10.7780/kjrs.2019.35.5.1.5
  18. Kwak, G.-H., and Park, N.-W., 2021. Two-stage deep learning model with LSTM-based autoencoder and CNN for crop classification using multi-temporal remote sensing images. Korean Journal of Remote Sensing, 37(4), 719-731. https://doi.org/10.7780/kjrs.2021.37.4.4
  19. Li, J., Shen, Y., and Yang, C., 2020. An adversarial generative network for crop classification from remote sensing time-series images. Remote Sensing, 13(1), 65. https://doi.org/10.3390/rs13010065
  20. Liu, J., Zhu, W., Atzberger, C., Zhao, A., Pan, Y., and Huang, X., 2018. A phenology-based method to map cropping patterns under a wheat-maize rotation using remotely sensed time-series data. Remote Sensing, 10(8), 1203. https://doi.org/10.3390/rs10081203
  21. Mahlayeye, M., Darvishzadeh, R., and Nelson, A., 2022. Cropping patterns of annual crops: A remote sensing review. Remote Sensing, 14(10), 2404. https://doi.org/10.3390/rs14102404
  22. Mingwei, Z., Qingbo, Z., Zhongxin, C., Jia, L., Yong, Z., and Chongfa, C., 2008. Crop discrimination in Northern China with double cropping systems using Fourier analysis of time-series MODIS data. International Journal of Applied Earth Observation and Geoinformation, 10(4), 476-485. https://doi.org/10.1016/j.jag.2007.11.002
  23. Misra, G., Cawkwell, F., and Wingler, A., 2020. Status of phenological research using Sentinel-2 data: A review. Remote Sensing, 12(17), 2760. https://doi.org/10.3390/rs12172760
  24. Qiu, B., Li, W., Tang, Z., Chen, C., and Qi, W., 2015. Mapping paddy rice areas based on vegetation phenology and surface moisture conditions. Ecological Indicators, 56, 79-86. https://doi.org/10.1016/j.ecolind.2015.03.039
  25. Rahimi, E., Barghjelveh, S., and Dong, P., 2021. Quantifying how urban landscape heterogeneity affects land surface temperature at multiple scales. Journal of Ecology and Environment, 45(1), 1-13. https://doi.org/10.1186/s41610-021-00203-z
  26. Rahimi, E., Barghjelveh, S., and Dong, P., 2022. A comparison of discrete and continuous metricsfor measuring landscape changes. Journal of the Indian Society of Remote Sensing, 50(7), 1257-1273. https://doi.org/10.1007/s12524-022-01526-7
  27. Russwurm, M., and Korner, M., 2017. Multi-temporal land cover classification with long short-term memory neural networks. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42, 551-558. https://doi.org/10.5194/isprs-archives-XLII-1-W1-551-2017
  28. Tan, B., Morisette, J. T., Wolfe, R. E., Gao, F., Ederer, G. A., Nightingale, J., and Pedelty, J. A., 2010. An enhanced TIMESAT algorithm for estimating vegetation phenology metrics from MODIS data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 4(2), 361-371. https://doi.org/10.1109/JSTARS.2010.2075916
  29. Tang, J., Korner, C., Muraoka, H., Piao, S., Shen, M., Thackeray, S. J., and Yang, X., 2016. Emerging opportunities and challenges in phenology: A review. Ecosphere, 7(8), e01436. https://doi.org/10.1002/ecs2.1436
  30. Tian, H., Huang, N., Niu, Z., Qin, Y., Pei, J., and Wang, J., 2019. Mapping winter crops in China with multi-source satellite imagery and phenology-based algorithm. Remote Sensing, 11(7), 820. https://doi.org/10.3390/rs11070820
  31. Waldner, F., Canto, G. S., and Defourny, P., 2015. Automated annual cropland mapping using knowledge-based temporal features. ISPRS Journal of Photogrammetry and Remote Sensing, 110, 1-13. https://doi.org/10.1016/j.isprsjprs.2015.09.013
  32. Wang, Y., Fang, S., Zhao, L., Huang, X., and Jiang, X., 2022a. Parcel-based summer maize mapping and phenology estimation combined using Sentinel-2 and time series Sentinel-1 data. International Journal of Applied Earth Observation and Geoinformation, 108, 102720. https://doi.org/10.1016/j.jag.2022.102720
  33. Wang, Y., Zhang, Z., Zuo, L., Wang, X., Zhao, X., and Sun, F., 2022b. Mapping crop distribution patterns and changes in China from 2000 to 2015 by fusing remote-sensing, statistics, and knowledge-based crop phenology. Remote Sensing, 14(8), 1800. https://doi.org/10.3390/rs14081800
  34. Yuan, Q., Shen, H., Li, T., Li, Z., Li, S., Jiang, Y. et al., 2020. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sensing of Environment, 241, 111716. https://doi.org/10.1016/j.rse.2020.111716
  35. Zaremba, W., Sutskever, I., and Vinyals, O., 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329. https://doi.org/10.48550/arXiv.1409.2329
  36. Zeng, L., Wardlow, B. D., Xiang, D., Hu, S., and Li, D., 2020. A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sensing of Environment, 237, 111511. https://doi.org/10.1016/j.rse.2019.111511
  37. Zhang, J., Chen, S., Wu, Z., and Fu, Y. H., 2022. Review of vegetation phenology trends in China in a changing climate. Progress in Physical Geography: Earth and Environment, 46(6), 829-845. https://doi.org/10.1177/03091333221114737
  38. Zhao, M., Peng, C., Xiang, W., Deng, X., Tian, D., Zhou, X. et al., 2013. Plant phenological modeling and its application in global climate change research: Overview and future challenges. Environmental Reviews, 21(1), 1-14. https://doi.org/10.1139/er-2012-0036
  39. Zhong, L., Hu, L., Yu, L., Gong, P., and Biging, G. S., 2016. Automated mapping of soybean and corn using phenology. ISPRS Journal of Photogrammetry and Remote Sensing, 119, 151-164. https://doi.org/10.1016/j.isprsjprs.2016.05.014
  40. Zhu, A.-X., Zhao, F.-H., Pan, H.-B., and Liu, J.-Z., 2021. Mapping rice paddy distribution using remote sensing by coupling deep learning with phenological characteristics. Remote Sensing, 13(7), 1360. https://doi.org/10.3390/rs13071360