• Title/Summary/Keyword: Grid search

Search Result 275, Processing Time 0.026 seconds

Optimization of PRISM Parameters and Digital Elevation Model Resolution for Estimating the Spatial Distribution of Precipitation in South Korea (남한 강수량 분포 추정을 위한 PRISM 매개변수 및 수치표고모형 최적화)

  • Park, Jong-Chul;Jung, Il-Won;Chang, Hee-Jun;Kim, Man-Kyu
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.15 no.3
    • /
    • pp.36-51
    • /
    • 2012
  • The demand for climatological datasets on a regularly spaced grid is increasing in diverse fields such as ecological and hydrological modeling as well as regional climate impact studies. PRISM (Precipitation-Elevation Regressions on Independent Slopes Model) is a useful method for estimating high-altitude precipitation. However, the optimization of PRISM parameters and DEM (Digital Elevation Model) resolution has not been well studied for South Korea. This study implemented PRISM and then optimized the model parameters and the DEM resolution to produce gridded annual average precipitation data for South Korea at 1 km spatial resolution over the period 2000-2005. The SCE-UA (Shuffled Complex Evolution-University of Arizona) method was employed for the optimization. In addition, a sensitivity analysis investigated the change in the model output with respect to variations in the parameters and the DEM spatial resolution. The results show that the optimal maximum radius within which the station search is conducted is 67 km, the minimum radius within which all stations are included is 31 km, and the minimum number of stations required for the cell precipitation-elevation regression is four. The optimal DEM resolution is 1×1 km. The study also shows that the PRISM output is very sensitive to variations in DEM spatial resolution. This study contributes to improving the accuracy of the PRISM technique as applied to South Korea.
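The parameter optimization described in this abstract can be illustrated with a simplified sketch. The paper used SCE-UA, an evolutionary method; here a brute-force grid search stands in for it, and `cross_validation_error` is a synthetic placeholder whose minimum is placed at the paper's reported optimum (the real objective, an error measure on the interpolated precipitation field, is not given in the abstract).

```python
import itertools

def cross_validation_error(max_radius_km, min_radius_km, min_stations):
    # Synthetic placeholder objective with its minimum at the paper's
    # reported optimum (67 km, 31 km, 4 stations); the real objective
    # would be a cross-validation error of the PRISM output.
    return ((max_radius_km - 67) ** 2
            + (min_radius_km - 31) ** 2
            + (min_stations - 4) ** 2)

def optimize_parameters(max_radii, min_radii, station_counts):
    # Exhaustively evaluate every parameter combination and keep the best.
    return min(itertools.product(max_radii, min_radii, station_counts),
               key=lambda p: cross_validation_error(*p))

best = optimize_parameters(range(40, 101), range(10, 51), range(2, 11))
# best == (67, 31, 4) for this synthetic objective
```

SCE-UA reaches the same optimum with far fewer objective evaluations, which matters when each evaluation means re-running the interpolation over the whole domain.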

Locating Microseismic Events using a Single Vertical Well Data (단일 수직 관측정 자료를 이용한 미소진동 위치결정)

  • Kim, Dowan;Kim, Myungsun;Byun, Joongmoo;Seol, Soon Jee
    • Geophysics and Geophysical Exploration
    • /
    • v.18 no.2
    • /
    • pp.64-73
    • /
    • 2015
  • Recently, hydraulic fracturing has been used in various fields, and microseismic monitoring is one of the best methods for judging where hydraulic fractures exist and how they are developing. When locating microseismic events using single vertical well data, distances from the vertical array and depths from the surface are generally determined using the time differences between compressional (P) wave and shear (S) wave arrivals, and azimuths are calculated using P-wave hodogram analysis. In field data, however, it is sometimes hard to acquire P-wave data, whose amplitude is smaller than that of the S wave, because microseismic data often have a very low signal-to-noise (S/N) ratio. To overcome this problem, this study developed a grid search algorithm that can find the event location using all combinations of arrival times recorded at the receivers. In addition, we introduced and analyzed a method that calculates azimuths using the S wave. Tests on synthetic data show that the inversion method using all combinations of arrival times and receivers can locate events without considering the origin time, even when only a single phase is used. The method also locates events with higher accuracy and is less sensitive to first-arrival picking errors than the conventional method. The method that calculates azimuths using the S wave provides reliable results when the dip between the event and the receiver is relatively small, but it shows limitations when the dip is greater than about 20° in our model test.
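The arrival-time grid search can be sketched as follows. This is a minimal illustration under assumed values (a homogeneous S-wave velocity, a six-level vertical receiver array, and a coarse in-plane candidate grid); the paper's actual velocity model and search grid are not given in the abstract. The unknown origin time cancels because only differences of arrival times between receiver pairs are compared, which is why a single phase suffices.

```python
import itertools
import math

V_S = 2500.0  # assumed homogeneous S-wave velocity, m/s
RECEIVERS = [(0.0, 0.0, z) for z in range(1000, 1600, 100)]  # vertical array

def travel_time(src, rec, v=V_S):
    return math.dist(src, rec) / v

def locate_event(picks, candidates):
    # For each candidate source, compare predicted differences of arrival
    # times between every receiver pair with the observed differences;
    # the unknown origin time cancels out of each difference.
    def misfit(src):
        pred = [travel_time(src, r) for r in RECEIVERS]
        return sum(((picks[i] - picks[j]) - (pred[i] - pred[j])) ** 2
                   for i, j in itertools.combinations(range(len(RECEIVERS)), 2))
    return min(candidates, key=misfit)

# Synthetic check: picks include an unknown origin time t0 that cancels.
true_src = (300.0, 0.0, 1250.0)
t0 = 12.34
picks = [t0 + travel_time(true_src, r) for r in RECEIVERS]
candidates = [(x, 0.0, z) for x in range(0, 501, 50)
                          for z in range(1000, 1600, 50)]
found = locate_event(picks, candidates)
# found recovers (300, 0.0, 1250) despite t0 never being estimated
```

Note that a vertical array alone cannot resolve azimuth from arrival times (the misfit is symmetric about the well axis), which is why the paper pairs this search with S-wave azimuth analysis.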

Weighted Energy Detector for Detecting Unknown Threat Signals in Electronic Warfare Systems in Weak-Power Signal Environments (전자전 미약신호 환경에서 미상 위협 신호원의 검출 성능 향상을 위한 가중 에너지 검출 기법)

  • Kim, Dong-Gyu;Kim, Yo-Han;Lee, Yu-Ri;Jang, Chungsu;Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.42 no.3
    • /
    • pp.639-648
    • /
    • 2017
  • Electronic warfare systems for extracting information from threat signals may have to operate under circumstances where the power of the received signal is weak. To detect threat signals precisely and rapidly, methods that exploit the whole energy of the received signal are required, instead of conventional methods that use a single received-signal sample. To utilize the whole energy, windows of numerous sizes would need to be implemented in the detector to cover every possible unknown length of the received signal, since no preliminary information about the uncooperative signals can be assumed. However, this grid search method requires computational complexity too large to be practically implemented. To resolve this complexity problem, an approach that reduces the number of windows by selecting a smaller number of representative windows can be considered. Each representative window in this approach, however, must cover a certain interval of the considered range, and the resulting mismatch between the length of the received signal and the window sizes degrades detection performance. Therefore, we propose a weighted energy detector, which yields improved detection performance compared with the conventional energy detector when the window size is smaller than the length of the received signal, and which is shown to exhibit the same performance in other circumstances.
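A minimal sketch of the idea: a conventional energy detector sums the squared magnitudes over a sliding window and compares the maximum with a threshold, while the weighted variant scales each sample inside the window before summation. The weight profile below is an illustrative placeholder; the abstract does not specify the paper's actual weighting rule or threshold design.

```python
import numpy as np

def energy_detector(x, window, threshold):
    # Conventional detector: maximum windowed energy versus a threshold.
    stat = max(float(np.sum(np.abs(x[i:i + window]) ** 2))
               for i in range(len(x) - window + 1))
    return stat > threshold

def weighted_energy_detector(x, window, weights, threshold):
    # Weighted variant: each sample in the window is scaled before
    # summation; the weights here are illustrative placeholders.
    w = np.asarray(weights, dtype=float)
    stat = max(float(np.sum(w * np.abs(x[i:i + window]) ** 2))
               for i in range(len(x) - window + 1))
    return stat > threshold

# Deterministic example: an 8-sample unit pulse in an otherwise silent record.
x = np.zeros(64)
x[20:28] = 1.0
detected = energy_detector(x, window=8, threshold=4.0)
```

With uniform weights the weighted detector reduces to the conventional one; the gain described in the abstract comes from choosing non-uniform weights when the window is shorter than the signal.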

Computational Optimization of Bioanalytical Parameters for the Evaluation of the Toxicity of the Phytomarker 1,4-Naphthoquinone and its Metabolite 1,2,4-Trihydroxynaphthalene

  • Gopal, Velmani;AL Rashid, Mohammad Harun;Majumder, Sayani;Maiti, Partha Pratim;Mandal, Subhash C
    • Journal of Pharmacopuncture
    • /
    • v.18 no.2
    • /
    • pp.7-18
    • /
    • 2015
  • Objectives: Lawsone (1,4-naphthoquinone) is a non-redox-cycling compound that can be catalyzed by DT-diaphorase (DTD) into 1,2,4-trihydroxynaphthalene (THN), which can generate reactive oxygen species by auto-oxidation. The purpose of this study was to evaluate the toxicity of the phytomarker 1,4-naphthoquinone and its metabolite THN by using the molecular docking program AutoDock 4. Methods: The 3D structures of ligands such as hydrogen peroxide (H2O2), nitric oxide synthase (NOS), catalase (CAT), glutathione (GSH), glutathione reductase (GR), glucose-6-phosphate dehydrogenase (G6PDH) and nicotinamide adenine dinucleotide phosphate hydrogen (NADPH) were drawn using HyperChem drawing tools, and the energy of all pdb files was minimized in HyperChem by MM+ followed by a semi-empirical (PM3) method. The docking process was studied with the ligand molecules to identify suitable dockings at the protein binding sites through annealing and genetic simulation algorithms. AutoDock Tools (ADT), released as an extension suite to the Python Molecular Viewer, was used to prepare the proteins and ligands. Grids centered on the active sites were generated with dimensions of 54×55×56 points and a grid spacing of 0.503. Parameters were adopted from comparisons of global and local search methods in drug docking: a maximum of 250,000 energy evaluations, a maximum of 27,000 generations, and mutation and crossover rates of 0.02 and 0.8. The number of docking runs was set to 10. Results: Lawsone and THN can be considered to bind efficiently with NOS, CAT, GSH, GR, G6PDH and NADPH, which was confirmed through hydrogen-bond affinity with the respective amino acids. Conclusion: Naphthoquinone derivatives of lawsone, which can be metabolized into THN by the catalyst DTD, were examined. Lawsone and THN were found to be identically potent molecules in their affinities for the selected proteins.
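The docking parameters listed in this abstract correspond to standard AutoDock 4 grid (GPF) and docking (DPF) keywords; a sketch of the relevant file fragments follows, with the grid center left as a placeholder since the actual active-site coordinates are not given here.

```
# Grid parameter file (GPF) fragment
npts 54 55 56            # grid dimensions reported in the abstract
spacing 0.503            # grid spacing reported in the abstract
gridcenter auto          # placeholder; actual active-site center not given

# Docking parameter file (DPF) fragment: genetic algorithm settings
ga_num_evals 250000      # maximum number of energy evaluations
ga_num_generations 27000 # maximum number of generations
ga_mutation_rate 0.02
ga_crossover_rate 0.8
ga_run 10                # number of docking runs
```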

Hybrid Machine Learning Model for Predicting the Direction of KOSPI Securities (코스피 방향 예측을 위한 하이브리드 머신러닝 모델)

  • Hwang, Heesoo
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.6
    • /
    • pp.9-16
    • /
    • 2021
  • In the past, there have been various studies on predicting the stock market with machine learning techniques using stock price data and financial big data. As stock index ETFs that can be traded through HTS and MTS have been created, research on predicting stock indices has recently attracted attention. In this paper, machine learning models for KOSPI's up and down predictions are implemented separately and optimized through a grid search of their control parameters. In addition, a hybrid machine learning model that combines the individual models is proposed to improve precision and increase the ETF trading return. The performance of the prediction models is evaluated by the accuracy and by the precision, which determines the ETF trading return. The accuracy and precision of the hybrid up-prediction model are 72.1% and 63.8%, and those of the down-prediction model are 79.8% and 64.3%. The precision of the hybrid down-prediction model is improved by at least 14.3% and at most 20.5%. The hybrid up and down prediction models show ETF trading returns of 10.49% and 25.91%, respectively. Trading inverse ×2 and leverage ETFs can increase the return by 1.5 to 2 times. Further research on the down-prediction machine learning model is expected to increase the rate of return.
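The per-direction grid search described in this abstract can be sketched with scikit-learn's GridSearchCV, scored on precision since precision determines the ETF trading return. Everything below is an assumption for illustration: the features, the model family (a gradient-boosting classifier stands in for the paper's unnamed models), the parameter grid, and the synthetic labels.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # placeholder features (e.g. technical indicators)
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # up(1)/down(0)

# Grid search over control parameters, optimizing precision
param_grid = {"n_estimators": [50, 100], "max_depth": [2, 3]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, scoring="precision", cv=5)
search.fit(X, y)
best_up_model = search.best_estimator_
```

The hybrid step would then combine the separately tuned up and down models, e.g. trading only when both agree, trading precision for higher signal quality.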

Development of an Input File Preparation Tool for Offline Coupling of DNDC and DSSAT Models (DNDC 지역별 구동을 위한 입력자료 생성 도구 개발)

  • Hyun, Shinwoo;Hwang, Woosung;You, Heejin;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.1
    • /
    • pp.68-81
    • /
    • 2021
  • The agricultural ecosystem is one of the major sources of greenhouse gas (GHG) emissions. In order to search for climate change adaptation options that mitigate GHG emissions while maintaining crop yield, it is advantageous to integrate multiple models at a high spatial resolution. The objective of this study was to develop a tool to support integrated assessment of climate change impact by coupling the DSSAT model and the DNDC model. The DNDC Regional Input File Tool (DRIFT) was developed to prepare input data for the regional mode of the DNDC model using the input and output data of the DSSAT model. In a case study, GHG emissions under climate change conditions were simulated using input data prepared by the DRIFT. The time required to prepare the input data increased with the number of grid points. Most steps of the process took a relatively short time; the bulk of the time was spent converting the daily flood-depth data of the DSSAT model into the flood periods of the DNDC model. Still, processing a large amount of data would require a long time, which could be reduced by parallelizing some calculation processes. Expanding the DRIFT to other models would help reduce the time required to prepare input data for those models.
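The flood-depth-to-flood-period conversion that dominated the preparation time can be sketched as a simple run-length pass over the daily series. The function name, the zero threshold, and the 1-indexed day convention are assumptions for illustration; the actual DSSAT and DNDC file formats are not shown.

```python
def flood_periods(daily_depth, threshold=0.0):
    # Convert a daily flood-depth series (DSSAT-style output) into a list
    # of (start_day, end_day) flooding periods (DNDC-style input).
    # Days are 1-indexed; a day counts as flooded when depth > threshold.
    periods, start = [], None
    for day, depth in enumerate(daily_depth, start=1):
        if depth > threshold and start is None:
            start = day                      # flooding begins
        elif depth <= threshold and start is not None:
            periods.append((start, day - 1))  # flooding ended yesterday
            start = None
    if start is not None:                     # still flooded at series end
        periods.append((start, len(daily_depth)))
    return periods

# flood_periods([0, 5, 5, 0, 3]) -> [(2, 3), (5, 5)]
```

A pass like this is linear in the series length per grid point, so the cost the abstract reports comes from repeating it over many grid points and years, which is exactly what parallelization would amortize.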

The Fault Diagnosis Model of Ship Fuel System Equipment Reflecting Time Dependency in Conv1D Algorithm Based on the Convolution Network (합성곱 네트워크 기반의 Conv1D 알고리즘에서 시간 종속성을 반영한 선박 연료계통 장비의 고장 진단 모델)

  • Kim, Hyung-Jin;Kim, Kwang-Sik;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research
    • /
    • v.46 no.4
    • /
    • pp.367-374
    • /
    • 2022
  • The purpose of this study was to propose a deep learning algorithm for the fault diagnosis of fuel pumps and purifiers of autonomous ships. A deep learning algorithm reflecting the time dependence of the measured signal was configured, and the failure patterns were trained using vibration signals measured in the equipment's regular operation and failure states. Considering the sequential time dependence of the deterioration implied in the vibration signal, this study adopts Conv1D with sliding-window computation for fault detection. The time dependence was also reflected by transforming the measured signal from two-dimensional to three-dimensional form. Additionally, the optimal values of the hyperparameters of the Conv1D model were determined using the grid search technique. The results show that the proposed data preprocessing method and the Conv1D model can reflect the sequential dependency between a fault and its effect on the measured signal, and can appropriately perform both anomaly and failure detection for the equipment chosen for the application.
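The two-dimensional-to-three-dimensional transformation for Conv1D can be sketched as a sliding window over the vibration signal. The window length and stride below are placeholders; the abstract does not report the values used in the study.

```python
import numpy as np

def to_windows(signal, window, stride):
    # Slide a fixed-length window along the 1-D vibration signal so each
    # training sample carries a short time history, producing the
    # (samples, timesteps, channels) shape that Conv1D layers expect.
    starts = range(0, len(signal) - window + 1, stride)
    return np.stack([signal[s:s + window] for s in starts])[..., np.newaxis]

x = np.arange(100, dtype=float)             # placeholder vibration record
batch = to_windows(x, window=20, stride=10)
# batch.shape == (9, 20, 1)
```

Each row of `batch` can then be fed to a Conv1D network, whose own hyperparameters (filter count, kernel size, etc.) would be tuned by the grid search the abstract describes.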

Stakeholder Awareness of Rural Spatial Planning Data Utilization Based on Survey (농촌공간계획 데이터 수급에 대한 이해당사자 인식조사)

  • Zaewoong Rhee;Sang-Hyun Lee;Sungyun Lee;Jinsung Kim;Rui Qu;Seung-Jong Bae;Soo-Jin Kim;Sangbum Kim
    • Journal of Korean Society of Rural Planning
    • /
    • v.29 no.3
    • /
    • pp.25-37
    • /
    • 2023
  • According to the 「Rural Spatial Reconstruction and Regeneration Support Act」, enacted on March 29, 2024, all local governments are required to establish a 'Rural Spatial Reconstruction and Regeneration Plan' (hereinafter the 'Rural Spatial Plan'). So that the 'Rural Spatial Plan' can be appropriately established, this study analyzed the supply and demand of spatial data from the perspective of user stakeholders and derived implications for improving the utilization of rural spatial planning data. Three key recommendations emerge from the results. First, an integrated database of rural spatial planning data should be established. This can address the low awareness of scattered data-providing websites, reduce the processing time for non-GIS data, and shorten data acquisition by making data search and download readily available. In particular, research should be conducted on establishing a spatial-analysis simulation system to support stakeholders' decision-making, considering that many stakeholders have difficulty with spatial analysis because spatial analysis techniques were not actively used in rural projects before the implementation of the rural agreement system in 2020. Second, research on improving data acquisition should be conducted for each data sector. The data sector groups rated lowest for ease of acquisition are 'Local Community Domain', 'Changes in Domestic and International Conditions', and 'Provision and Utilization of Daily Life Services'. Lastly, in-depth research is needed on how to raise each stakeholder supplying rural spatial planning data to the position of an active player. Stakeholders in 'University Institutions' and 'Public Enterprises and Research Institutes' should give those who participate in formulating rural spatial plans access to the raw data collected for public work. Stakeholders in 'Private Companies' need realistic measures to build a data pool centered on consultative bodies among existing private companies, followed by a step-by-step strategy to open it fully through the participation of various stakeholders. To induce 'Village Residents and Associations' stakeholders to play a leading role as owners and producers of data, personnel should be trained to collect and record data related to the village, and support measures should be prepared to sustain these activities.

A Study on the Prediction of Uniaxial Compressive Strength Classification Using Slurry TBM Data and Random Forest (이수식 TBM 데이터와 랜덤포레스트를 이용한 일축압축강도 분류 예측에 관한 연구)

  • Tae-Ho Kang;Soon-Wook Choi;Chulho Lee;Soo-Ho Chang
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.547-560
    • /
    • 2023
  • Recently, research on predicting ground classification using machine learning techniques, TBM excavation data, and ground data has been increasing. In this study, a multi-class prediction study for uniaxial compressive strength (UCS) was conducted by applying a random forest model, a decision-tree-based method widely used in various fields, to machine data and ground data acquired at three slurry shield TBM sites. For the classification prediction, the data were split 7:3 into training and test sets, and a grid search with 5-fold cross-validation was used to select the optimal parameters. As a result of the classification learning for UCS using a random forest, the accuracy of the multi-class prediction model was high: 0.983 on the training set and 0.982 on the test set. However, due to the imbalanced data distribution between the classes, recall was low for class 4. Additional research is judged to be needed to increase the amount of measured UCS data acquired at various sites.
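The training setup in this abstract (7:3 split, random forest, grid search with 5-fold cross-validation) maps directly onto scikit-learn. The features, labels, and parameter grid below are synthetic placeholders, since the TBM machine data and the actual grid searched are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))               # placeholder TBM machine data
y = np.digitize(X[:, 0], [-1.0, 0.0, 1.0])  # four synthetic UCS classes

# 7:3 train/test split and 5-fold cross-validated grid search, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
grid = {"n_estimators": [100, 200], "max_depth": [5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42), grid, cv=5)
search.fit(X_tr, y_tr)
test_accuracy = search.score(X_te, y_te)
```

The class-imbalance issue the abstract reports would show up here as low per-class recall in `sklearn.metrics.classification_report`, even while overall accuracy stays high.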

Location Service Modeling of Distributed GIS for Replication Geospatial Information Object Management (중복 지리정보 객체 관리를 위한 분산 지리정보 시스템의 위치 서비스 모델링)

  • Jeong, Chang-Won;Lee, Won-Jung;Lee, Jae-Wan;Joo, Su-Chong
    • The KIPS Transactions:PartD
    • /
    • v.13D no.7 s.110
    • /
    • pp.985-996
    • /
    • 2006
  • As internet technologies develop, the geographic information system environment is changing to web-based services. Since the geospatial information of existing Web-GIS services was developed independently, there is no interoperability to support diverse map formats. The same geospatial information object may be duplicated across separate GISs for various purposes, so intelligent strategies are needed for optimal replica selection, that is, for identifying replicated geospatial information objects. For the management of replicated objects, OMG, GLOBE, and grid computing have suggested related frameworks, but these efforts do not go far enough in the case of geospatial information objects. This paper presents a location service model that supports optimal selection among replicas and the management of replicated objects. It consists of three main services. The first is a binding service, which saves the names and properties of objects defined by users according to service offers and enables clients to search them. The second is a location service, which manages location information with contact records and independently obtains performance information, together with contact addresses, through the system's Load Sharing Facility. The third is an intelligent selection service, which obtains basic and performance information from the binding and location services and provides both faster access and better performance characteristics through rules in an intelligent model based on rough sets. To validate the location service model, this paper presents the execution process of the location service through a graphical user interface.