• Title/Summary/Keyword: Network division


Analysis of Applicability of RPC Correction Using Deep Learning-Based Edge Information Algorithm (딥러닝 기반 윤곽정보 추출자를 활용한 RPC 보정 기술 적용성 분석)

  • Jaewon Hur;Changhui Lee;Doochun Seo;Jaehong Oh;Changno Lee;Youkyung Han
    • Korean Journal of Remote Sensing / v.40 no.4 / pp.387-396 / 2024
  • Most very high-resolution (VHR) satellite images provide rational polynomial coefficients (RPC) data to facilitate the transformation between ground coordinates and image coordinates. However, initial RPC often contains geometric errors, necessitating correction through matching with ground control points (GCPs). A GCP chip is a small image patch extracted from an orthorectified image together with height information of the center point, which can be directly used for geometric correction. Many studies have focused on area-based matching methods to accurately align GCP chips with VHR satellite images. In cases with seasonal differences or changed areas, edge-based algorithms are often used for matching due to the difficulty of relying solely on pixel values. However, traditional edge extraction algorithms, such as Canny edge detectors, require appropriate threshold settings tailored to the spectral characteristics of satellite images. Therefore, this study utilizes deep learning-based edge information that is insensitive to the regional characteristics of satellite images for matching. Specifically, we use a pretrained pixel difference network (PiDiNet) to generate the edge maps for both satellite images and GCP chips. These edge maps are then used as input for normalized cross-correlation (NCC) and relative edge cross-correlation (RECC) to identify the peak points with the highest correlation between the two edge maps. To remove mismatched pairs and thus obtain the bias-compensated RPC, we iteratively apply data snooping. Finally, we compare the results qualitatively and quantitatively with those obtained from traditional NCC and RECC methods. The PiDiNet-based approach achieved high matching accuracy, with root mean square error (RMSE) values ranging from 0.3 to 0.9 pixels. However, the PiDiNet-generated edges were thicker than those from the Canny method, leading to slightly lower registration accuracy in some images. Nevertheless, PiDiNet consistently produced characteristic edge information, allowing for successful matching even in challenging regions. This study demonstrates that improving the robustness of edge-based registration methods can facilitate effective registration across diverse regions.
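
As a rough illustration of the edge-map matching step described above, the following is a minimal sketch of NCC-based template matching between a GCP chip edge map and a satellite-image search window. It assumes the PiDiNet edge maps are already available as 2D NumPy arrays; the arrays, sizes, and function name are illustrative, not the authors' implementation (which also uses RECC and data snooping).

```python
# Minimal sketch: edge-map matching via normalized cross-correlation (NCC).
# Assumes PiDiNet edge maps for a GCP chip and a satellite search window are
# available as 2D float arrays; names and sizes here are hypothetical.
import numpy as np

def ncc_peak(chip_edges: np.ndarray, search_edges: np.ndarray):
    """Slide the chip edge map over the search window and return the
    (row, col) offset with the highest NCC score."""
    ch, cw = chip_edges.shape
    sh, sw = search_edges.shape
    chip = chip_edges - chip_edges.mean()
    chip_norm = np.sqrt((chip ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(sh - ch + 1):
        for c in range(sw - cw + 1):
            win = search_edges[r:r + ch, c:c + cw]
            win = win - win.mean()
            denom = chip_norm * np.sqrt((win ** 2).sum())
            if denom == 0:
                continue  # flat window, no edge information
            score = float((chip * win).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy usage: a synthetic edge blob placed at a known offset.
search = np.zeros((64, 64))
search[20:28, 30:38] = 1.0
chip = search[16:40, 26:50]          # 24x24 chip containing the blob
pos, score = ncc_peak(chip, search)
print(pos, round(score, 3))          # expect (16, 26) with score 1.0
```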

Monitoring soybean growth using L, C, and X-bands automatic radar scatterometer measurement system (L, C, X-밴드 레이더 산란계 자동측정시스템을 이용한 콩 생육 모니터링)

  • Kim, Yi-Hyun;Hong, Suk-Young;Lee, Hoon-Yol;Lee, Jae-Eun
    • Korean Journal of Remote Sensing / v.27 no.2 / pp.191-201 / 2011
  • Soybean is widely grown for its edible beans, which have numerous uses. Microwave remote sensing has great potential over conventional remote sensing in the visible and infrared spectra owing to its all-weather, day-and-night imaging capability. In this investigation, a ground-based polarimetric scatterometer operating at multiple frequencies was used to continuously monitor the crop conditions of a soybean field. Polarimetric backscatter data at L, C, and X-bands were acquired every 10 minutes to provide microwave observations at various soybean growth stages. The polarimetric scatterometer consists of a vector network analyzer, a microwave switch, radio frequency cables, a power unit, and a personal computer. The scatterometer components were installed inside an air-conditioned shelter to maintain constant temperature and humidity during the data acquisition period. The backscattering coefficients were calculated from the measured data at a 40° incidence angle and full polarization (HH, VV, HV, VH) by applying the radar equation. The soybean growth data, such as leaf area index (LAI), plant height, fresh and dry weight, vegetation water content, and pod weight, were measured periodically throughout the growing season. We measured the temporal variations of the backscattering coefficients of the soybean crop at L, C, and X-bands during the growth period. In all three bands, VV-polarized backscattering coefficients were higher than HH-polarized backscattering coefficients until mid-June, and thereafter HH-polarized backscattering coefficients were higher than VV- and HV-polarized backscattering coefficients. However, the cross-over stage (HH > VV) differed by frequency: DOY 200 for L-band and DOY 210 for both C- and X-bands. The temporal trend of the backscattering coefficients for all bands agreed with the soybean growth data such as LAI, dry weight, and plant height; i.e., they increased until about DOY 271 and decreased afterward. We plotted the relationships between the backscattering coefficients at the three bands and the soybean growth parameters. The growth parameters were highly correlated with HH-polarization at L-band (over r=0.92).
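
The reported correlation between backscatter and growth parameters (e.g., r > 0.92 for HH at L-band) can be illustrated with a short sketch; the time series below are synthetic placeholders, not the paper's measurements.

```python
# Hedged sketch: correlating a temporal series of HH-polarized backscattering
# coefficients (dB) with a soybean growth parameter such as LAI.
# All values below are synthetic placeholders, not the paper's data.
import numpy as np

day_of_year = np.array([160, 180, 200, 220, 240, 260, 271, 285])
sigma0_hh_db = np.array([-18.0, -15.5, -13.0, -11.2, -9.8, -8.5, -8.0, -9.5])
lai = np.array([0.4, 1.1, 2.3, 3.5, 4.6, 5.4, 5.8, 5.0])

r = np.corrcoef(sigma0_hh_db, lai)[0, 1]  # Pearson correlation coefficient
print(f"r(HH backscatter, LAI) = {r:.2f}")
```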

Evaluation of Grade-Classification of Wood Waste in Korea by Characteristic Analysis (국내 폐목재 특성분석을 통한 등급화 평가)

  • Kim, Joung-Dae;Park, Joon-Seok;Do, In-Hwan;Hong, Soo-Youl;Oh, Gil-Jong;Chung, David;Yoon, Jung-In;Phae, Chae-Gun
    • Journal of Korean Society of Environmental Engineers / v.30 no.11 / pp.1102-1110 / 2008
  • This research was performed to analyze the characteristics of wood wastes by origin and to suggest a grade classification for them. Proximate analysis was conducted according to Korean standards, and heating value, heavy metal, and Cl concentrations were analyzed for grade classification. Wood wastes were sampled by origin from forest, living, construction and demolition, and industrial areas. The moisture content of most wood wastes ranged from 5 to 10%. Their VS (volatile solids) and ash contents were > 95% and < 5%, respectively. Most wood wastes, except wood used for growing mushrooms, met the standard for refuse-derived fuel (low heating value ≥ 3,500 kcal/kg). The CCA (Cr, Cu, As) concentration of wood wastes used in benches, wasted fishing boats, and railroad crossties was higher than that of the other samples. The Cl content was approximately 1.3% in wood boxes for fish and ≤ 0.2% in the other wood wastes. The Cl content of all wood wastes used in this research met the standard for refuse-derived fuel (Cl ≤ 0.2%, dry weight basis). If the wood wastes were classified into three grades, plywood would fall in the 2nd grade, and MDF (medium-density fiberboard), wooden benches, painted electric wire drums, wasted fishing boats, and railroad crossties in the 3rd grade.
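
The RDF thresholds quoted above (low heating value ≥ 3,500 kcal/kg; Cl ≤ 0.2% on a dry weight basis) can be expressed as simple checks; this sketch is illustrative and does not reproduce the full Korean grading criteria.

```python
# Illustrative sketch of the refuse-derived fuel (RDF) threshold checks quoted
# in the abstract. The function and its use are hypothetical; the official
# Korean 3-grade criteria are more detailed than these two checks.

def rdf_checks(low_heating_value_kcal_kg: float, cl_percent_dry: float) -> dict:
    return {
        "heating_value_ok": low_heating_value_kcal_kg >= 3500,  # >= 3,500 kcal/kg
        "chlorine_ok": cl_percent_dry <= 0.2,                   # <= 0.2 % dry basis
    }

print(rdf_checks(4200, 0.15))  # typical wood waste: both criteria met
print(rdf_checks(4000, 1.3))   # e.g., a wood box for fish: Cl exceeds the limit
```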

Urban archaeological investigations using surface 3D Ground Penetrating Radar and Electrical Resistivity Tomography methods (3차원 지표레이다와 전기비저항 탐사를 이용한 도심지 유적 조사)

  • Papadopoulos, Nikos;Sarris, Apostolos;Yi, Myeong-Jong;Kim, Jung-Ho
    • Geophysics and Geophysical Exploration / v.12 no.1 / pp.56-68 / 2009
  • Ongoing and extensive urbanisation, which is frequently accompanied by careless construction works, may threaten important archaeological structures that are still buried in urban areas. Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) are among the most promising methods for resolving buried archaeological structures in urban territories. In this work, three case studies are presented, each of which involves an integrated geophysical survey employing the surface three-dimensional (3D) ERT and GPR techniques in order to archaeologically characterise the investigated areas. The test field sites are located at the historical centres of two of the most populated cities of the island of Crete, in Greece. The ERT and GPR data were collected along a dense network of parallel profiles. The subsurface resistivity structure was reconstructed by processing the apparent resistivity data with a 3D inversion algorithm. The GPR sections were processed in a systematic way, applying specific filters to the data in order to enhance their information content. Finally, horizontal depth slices representing the 3D variation of the physical properties were created. The GPR and ERT images contributed significantly to reconstructing the complex subsurface properties of these urban areas. Strong GPR reflections and high-resistivity anomalies were correlated with possible archaeological structures. Subsequent excavations at specific places at both sites verified the geophysical results. These case studies demonstrate the applicability of the ERT and GPR techniques during the design and construction stages of urban infrastructure works, indicating areas of archaeological significance and guiding archaeological excavations before construction work.
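
A minimal sketch of the depth-slicing step, assuming the inverted 3D resistivity volume is a NumPy array ordered (z, y, x); the volume, layer thickness, and buried anomaly are synthetic stand-ins, not the surveys' data.

```python
# Minimal sketch: extracting horizontal depth slices from an inverted 3D
# resistivity volume, as used to map high-resistivity archaeological anomalies.
# The volume is synthetic; the (z, y, x) axis order and 0.5 m layer thickness
# are assumptions for illustration.
import numpy as np

nz, ny, nx = 10, 40, 60
rho = np.random.lognormal(mean=3.0, sigma=0.3, size=(nz, ny, nx))  # ohm-m
rho[3, 15:20, 25:35] *= 10  # bury a high-resistivity block (e.g., a wall)

dz = 0.5  # assumed layer thickness in metres
for iz in range(nz):
    depth_slice = rho[iz]
    print(f"depth {iz * dz:.1f}-{(iz + 1) * dz:.1f} m: "
          f"max resistivity {depth_slice.max():.0f} ohm-m")
```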

Bioinformatic Analysis of the Canine Genes Related to Phenotypes for the Working Dogs (특수 목적견으로서의 품성 및 능력 관련 유전자들에 관한 생물정보학적 분석)

  • Kwon, Yun-Jeong;Eo, Jungwoo;Choi, Bong-Hwan;Choi, Yuri;Gim, Jeong-An;Kim, Dahee;Kim, Tae-Hun;Seong, Hwan-Hoo;Kim, Heui-Soo
    • Journal of Life Science / v.23 no.11 / pp.1325-1335 / 2013
  • Working dogs, such as rescue dogs, military watch dogs, guide dogs, and search dogs, are selected by in-training examination of desired traits, including concentration, possessiveness, and boldness. In recent years, genetic information has come to be considered an important factor underlying the outstanding abilities of working dogs. To characterize the molecular features of the canine genes related to working-dog phenotypes, we investigated 24 previously reported genes (AR, BDNF, DAT, DBH, DGCR2, DRD4, MAOA, MAOB, SLC6A4, TH, TPH2, IFT88, KCNA3, TBR2, TRKB, ACE, GNB1, MSTN, PLCL1, SLC25A22, WFIKKN2, APOE, GRIN2B, and PIK3CG) categorized into personality, olfactory sense, and athletic/learning ability. We analyzed the chromosomal locations, gene-gene interactions, Gene Ontology annotations, and expression patterns of these genes using bioinformatic tools. In addition, variable number of tandem repeat (VNTR) and microsatellite (MS) polymorphisms in the AR, MAOA, MAOB, TH, DAT, DBH, and DRD4 genes were reviewed. Taken together, we suggest that the genetic background of the canine genes associated with various working-dog behaviors and skill-performance attributes could be used for the proper selection of superior working dogs.
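
For reference, the 24 gene symbols and the VNTR/MS subset listed above can be kept in a small script. The three trait categories are named in the abstract, but the full gene-to-category mapping is not given there, so none is assumed here.

```python
# The 24 canine genes listed in the abstract, with the subset whose VNTR/MS
# polymorphisms were reviewed. The three trait categories are named but their
# full gene-to-category mapping is not given in the abstract, so it is left out.
genes = ["AR", "BDNF", "DAT", "DBH", "DGCR2", "DRD4", "MAOA", "MAOB",
         "SLC6A4", "TH", "TPH2", "IFT88", "KCNA3", "TBR2", "TRKB", "ACE",
         "GNB1", "MSTN", "PLCL1", "SLC25A22", "WFIKKN2", "APOE", "GRIN2B",
         "PIK3CG"]

categories = ["personality", "olfactory sense", "athletic/learning ability"]
vntr_ms_reviewed = {"AR", "MAOA", "MAOB", "TH", "DAT", "DBH", "DRD4"}

for g in sorted(genes):
    marker = " (VNTR/MS polymorphism reviewed)" if g in vntr_ms_reviewed else ""
    print(g + marker)
```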

A Study on the Retrieval of River Turbidity Based on KOMPSAT-3/3A Images (KOMPSAT-3/3A 영상 기반 하천의 탁도 산출 연구)

  • Kim, Dahui;Won, You Jun;Han, Sangmyung;Han, Hyangsun
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1285-1300 / 2022
  • Turbidity, a measure of the cloudiness of water, is used as an important index for water quality management. Turbidity can vary greatly in small river systems, which affects water quality in national rivers. Therefore, the generation of high-resolution spatial information on turbidity is very important. In this study, a turbidity retrieval model using Korea Multi-Purpose Satellite-3 and -3A (KOMPSAT-3/3A) images was developed for high-resolution turbidity mapping of the Han River system, based on the eXtreme Gradient Boosting (XGBoost) algorithm. To this end, the top-of-atmosphere (TOA) spectral reflectance was calculated from a total of 24 KOMPSAT-3/3A images and 150 Landsat-8 images. The Landsat-8 TOA spectral reflectance was cross-calibrated to the KOMPSAT-3/3A bands. The turbidity measured by the National Water Quality Monitoring Network was used as the reference dataset, and the input variables were the TOA spectral reflectance at the locations of in situ turbidity measurement, the spectral indices (the normalized difference vegetation index, normalized difference water index, and normalized difference turbidity index), and the Moderate Resolution Imaging Spectroradiometer (MODIS)-derived atmospheric products (the atmospheric optical thickness, water vapor, and ozone). Furthermore, by analyzing the KOMPSAT-3/3A TOA spectral reflectance at different turbidities, a new spectral index, the new normalized difference turbidity index (nNDTI), was proposed and added as an input variable to the turbidity retrieval model. The XGBoost model showed excellent retrieval performance, with a root mean square error (RMSE) of 2.70 NTU and a normalized RMSE (NRMSE) of 14.70% compared to in situ turbidity, and the nNDTI proposed in this study was identified as the most important variable. The developed turbidity retrieval model was applied to the KOMPSAT-3/3A images to map high-resolution river turbidity, making it possible to analyze the spatiotemporal variations of turbidity. Through this study, we confirmed that KOMPSAT-3/3A images are very useful for retrieving high-resolution and accurate spatial information on river turbidity.
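
A hedged sketch of the retrieval setup: an XGBoost regressor trained on TOA reflectance, spectral indices, and atmospheric variables. The data are synthetic, the feature layout is an assumption, and the nNDTI formula is not given in the abstract, so it appears only as a precomputed placeholder column.

```python
# Hedged sketch of the turbidity-retrieval setup: an XGBoost regressor on
# TOA reflectance, spectral indices, and MODIS atmospheric variables.
# Data and feature layout are synthetic assumptions; the nNDTI formula is
# not given in the abstract, so it enters only as a placeholder feature.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 500
X = rng.random((n, 8))  # [blue, green, red, nir, ndvi, ndwi, nndti, aot]
y = 30 * X[:, 6] + 5 * X[:, 2] + rng.normal(0, 1, n)  # toy turbidity (NTU)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE = {rmse:.2f} NTU")
print("feature importances:", np.round(model.feature_importances_, 3))
```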

Immune Cells Are Differentially Affected by SARS-CoV-2 Viral Loads in K18-hACE2 Mice

  • Jung Ah Kim;Sung-Hee Kim;Jeong Jin Kim;Hyuna Noh;Su-bin Lee;Haengdueng Jeong;Jiseon Kim;Donghun Jeon;Jung Seon Seo;Dain On;Suhyeon Yoon;Sang Gyu Lee;Youn Woo Lee;Hui Jeong Jang;In Ho Park;Jooyeon Oh;Sang-Hyuk Seok;Yu Jin Lee;Seung-Min Hong;Se-Hee An;Joon-Yong Bae;Jung-ah Choi;Seo Yeon Kim;Young Been Kim;Ji-Yeon Hwang;Hyo-Jung Lee;Hong Bin Kim;Dae Gwin Jeong;Daesub Song;Manki Song;Man-Seong Park;Kang-Seuk Choi;Jun Won Park;Jun-Won Yun;Jeon-Soo Shin;Ho-Young Lee;Ho-Keun Kwon;Jun-Young Seo;Ki Taek Nam;Heon Yung Gee;Je Kyung Seong
    • IMMUNE NETWORK / v.24 no.2 / pp.7.1-7.19 / 2024
  • Viral load and the duration of viral shedding of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are important determinants of the transmission of coronavirus disease 2019. In this study, we examined the effects of viral dose on the lung and spleen of K18-hACE2 transgenic mice by temporal histological and transcriptional analyses. Approximately 1×10⁵ plaque-forming units (PFU) of SARS-CoV-2 induced strong host responses in the lungs from 2 days post inoculation (dpi) that did not recover until the mice died, whereas at 1×10² PFU responses to the virus were obvious at 5 dpi and recovered to the basal state by 14 dpi. Further, flow cytometry showed that the number of CD8+ T cells continuously increased in 1×10² PFU-virus-infected lungs from 2 dpi, but not in 1×10⁵ PFU-virus-infected lungs. In spleens, responses to the virus were prominent from 2 dpi and the number of B cells was significantly decreased at 1×10⁵ PFU; however, 1×10² PFU of virus induced very weak responses from 2 dpi that recovered by 10 dpi. Although the defense responses returned to normal and the mice survived, lung histology showed evidence of fibrosis, suggesting sequelae of SARS-CoV-2 infection. Our findings indicate that specific effectors of the immune response in the lung and spleen were either increased or depleted depending on the dose of SARS-CoV-2. This study demonstrates that the response of local and systemic immune effectors to a viral infection varies with viral dose, which either exacerbates the severity of the infection or accelerates its elimination.

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • A knowledge map is widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and assisting users in navigating the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; i.e., other R&D data such as research papers, patents, and reports are connected with the research project as its outputs. The lightweight ontology is used to represent simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships between the integrated data, a Relational Data-to-Triples transformer is implemented. Also, a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in the area of knowledge management to store, manage, and process an organization's data as knowledge; the other is used for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map. Using a lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology is used to represent the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by author and performer relationships. A knowledge map for displaying researchers' networks is created; the researchers' network is derived from the co-authoring relationships of the national R&D documents and the co-participation relationships of the national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about the national R&D data, such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and merged into the integrated database.
The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract these relationships and topic keywords based on semantics, not on simple keyword matching. Lastly, we present an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and the knowledge map services created on top of the knowledge base are also introduced.
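
A minimal sketch of the Relational Data-to-Triples step using rdflib, assuming a hypothetical namespace and predicate names rather than the paper's actual R&D ontology; the SPARQL query shows how a co-participation style relationship can be inferred from the triples.

```python
# Minimal sketch of a "Relational Data-to-Triples" step with rdflib.
# The namespace and predicate names are hypothetical stand-ins for the
# paper's lightweight R&D ontology.
from rdflib import Graph, Namespace, Literal

RND = Namespace("http://example.org/rnd/")  # hypothetical ontology namespace
g = Graph()

# Relational rows: (project_id, output_type, output_id, author)
rows = [
    ("P001", "paper", "D100", "Hong"),
    ("P001", "patent", "D200", "Kim"),
]
for proj, otype, doc, author in rows:
    g.add((RND[doc], RND["outputOf"], RND[proj]))     # project-output link
    g.add((RND[doc], RND["hasAuthor"], RND[author]))  # document-author link
    g.add((RND[doc], RND["type"], Literal(otype)))

# Infer a co-participation style relationship by querying shared projects.
q = """SELECT ?a ?b WHERE {
  ?d1 <http://example.org/rnd/outputOf> ?p .
  ?d2 <http://example.org/rnd/outputOf> ?p .
  ?d1 <http://example.org/rnd/hasAuthor> ?a .
  ?d2 <http://example.org/rnd/hasAuthor> ?b .
  FILTER(?a != ?b)
}"""
for a, b in g.query(q):
    print(a, "co-participates with", b)
```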

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.111-124 / 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model design studies for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant. Domestic studies were not much different, though they focused mainly on hypertension and diabetes. Since hyperlipidemia is a chronic disease of high importance, like hypertension and diabetes, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the Korea Health Panel 2012 data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalized, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted; a total of 2,176 people were selected for the study. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except for length of smoking) were expressed as dummy variables, assumed to be separate variables on the basis of a reference group, and analyzed. Six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at a significance level of 0.1. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used. For the SVM, the input variables selected by the genetic algorithm consisted of six variables (age, marital status, education level, economic activity, smoking period, and physical activity status), and for the artificial neural network the genetic algorithm selected three variables (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia patients, and compared the classification performances using TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and the accuracy of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of classification models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as input. For classification models based on the input variables selected through the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%.
Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases. To do this, we used SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy of hyperlipidemia with stacking as the meta-learner was higher than with the other meta-learning algorithms. However, the predictive performance of the proposed meta-learning algorithm is the same as that of the single model with the best performance, the SVM (88.6%). The limitations of this study are as follows. First, although various variable selection methods were tried, most variables used in the study were categorical dummy variables. With a large number of categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study has significance in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which have not been studied previously. The result of improving model accuracy by applying various variable selection techniques is also meaningful. In addition, we expect the proposed model to be effective for the prevention and management of hyperlipidemia.
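
A minimal scikit-learn sketch of the stacking setup described above, with SVM and MLP base learners feeding an SVM meta-classifier; the data are synthetic stand-ins for the Korea Health Panel variables, and the hyperparameters are assumptions.

```python
# Hedged sketch of the stacking setup described above: SVM and MLP as base
# learners whose predicted outputs feed an SVM meta-classifier.
# Data are synthetic stand-ins for the Korea Health Panel variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 2,176 samples and 6 features, mirroring the study's sample size and the
# six variables selected by the genetic algorithm (layout assumed).
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
]
stack = StackingClassifier(estimators=base, final_estimator=SVC())
stack.fit(X_tr, y_tr)
print(f"stacking accuracy: {stack.score(X_te, y_te):.3f}")
```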

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to construct the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are considered for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiment, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; it does not include any local search technique such as the IHCM used in the HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets, are presented for comparing the performances of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: total number of generations 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 iterations are made to eliminate the randomness of the HGA and GA searches.
With performance comparisons, network representations by opening/closing decision, and convergence processes for the two types of the RLNCC, the experimental results show that the HGA has significantly better performance than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, it is shown that the proposed HGA approach is more efficient than the conventional GA approach for the two types of the RLNCC, since the former combines the GA search with an additional local search process, while the latter relies on the GA search alone. For a future study, much larger RLNCCs will be tested to assess the robustness of our approach.
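
A toy sketch of the HGA loop described above: 0/1 bit-strings encode open/close decisions, with two-point crossover, random bit-flip mutation, and a hill-climbing refinement of the best individual in each generation; the cost function is a stand-in for the paper's MIP objective, not the actual model.

```python
# Hedged sketch of the HGA loop: 0/1 bit-strings encode open/close decisions,
# with two-point crossover, random bit-flip mutation, and a hill-climbing
# refinement of the best individual each generation. The cost function is a
# toy stand-in for the paper's MIP objective (transport + fixed + handling).
import random

N_BITS, POP, GENS, CX_RATE, MUT_RATE = 12, 20, 200, 0.5, 0.1
FIXED = [random.uniform(8, 13) for _ in range(N_BITS)]  # toy opening costs

def cost(bits):
    if sum(bits) == 0:
        return float("inf")  # at least one facility must be open
    # Fixed cost of open facilities plus a toy transport term that falls
    # as more facilities open, creating the usual open/close trade-off.
    return sum(f for b, f in zip(bits, FIXED) if b) + 50.0 / sum(bits)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(N_BITS), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(bits):
    return [1 - b if random.random() < MUT_RATE else b for b in bits]

def hill_climb(bits):
    # Local search: flip each bit and keep the change if the cost improves.
    best = bits[:]
    for i in range(N_BITS):
        trial = best[:]
        trial[i] = 1 - trial[i]
        if cost(trial) < cost(best):
            best = trial
    return best

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)
    elite = hill_climb(pop[0])  # hybrid step: refine the best individual
    children = [elite]
    while len(children) < POP:
        a, b = random.sample(pop[:POP // 2], 2)
        child = two_point_crossover(a, b) if random.random() < CX_RATE else a[:]
        children.append(mutate(child))
    pop = children

print("best cost:", round(cost(min(pop, key=cost)), 2))
```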