• Title/Summary/Keyword: Grid Search Method


Spectral Inversion of Time-domain Induced Polarization Data (시간영역 유도분극 자료의 Cole-Cole 역산)

  • Kim, Yeon-Jung;Cho, In-Ky
    • Geophysics and Geophysical Exploration / v.24 no.4 / pp.171-179 / 2021
  • We outline a process for estimating Cole-Cole parameters from time-domain induced polarization (IP) data. The IP transients are inverted to 2D Cole-Cole earth models comprising resistivity, chargeability, relaxation time, and frequency exponent. Our inversion algorithm consists of two stages. First, the measured voltage decay curves are converted into time series of current-on-time apparent resistivity to circumvent the negative-chargeability problem, and a 4D inversion recovers a resistivity model at each time channel that increases monotonically with time. Second, the intrinsic Cole-Cole parameters are recovered by inverting the resistivity time series of each inversion block; they can be estimated readily by setting the initial model close to the true values through a grid search. Finally, using synthetic data sets, we demonstrate that our algorithm images Cole-Cole earth models effectively.
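
The grid-search initialization described above can be sketched as follows. This is a minimal illustration that swaps in a simplified stretched-exponential stand-in for the true time-domain Cole-Cole decay (which requires a series expansion); all parameter values and grids are hypothetical.

```python
import numpy as np

# Hypothetical stretched-exponential proxy for the time-domain Cole-Cole
# decay; used here only to illustrate initializing (m, tau, c) by grid search.
def decay(t, rho0, m, tau, c):
    return rho0 * (1.0 - m * (1.0 - np.exp(-(t / tau) ** c)))

t = np.logspace(-2, 1, 50)                       # time channels (s)
obs = decay(t, 100.0, 0.3, 0.5, 0.6)             # synthetic "observed" series

best, best_err = None, np.inf
for m in np.linspace(0.1, 0.9, 9):               # chargeability grid
    for tau in np.logspace(-2, 1, 10):           # relaxation-time grid
        for c in np.linspace(0.1, 1.0, 10):      # frequency-exponent grid
            err = np.sum((decay(t, 100.0, m, tau, c) - obs) ** 2)
            if err < best_err:
                best, best_err = (m, tau, c), err

print(best)  # coarse initial model for the subsequent inversion
```

The coarse grid only has to land near the true values; the subsequent gradient-based inversion then refines them.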

Battery thermal runaway cell detection using DBSCAN and statistical validation algorithms (DBSCAN과 통계적 검증 알고리즘을 사용한 배터리 열폭주 셀 탐지)

  • Jingeun Kim;Yourim Yoon
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.569-582 / 2023
  • The lead-acid battery is the oldest rechargeable battery system and has maintained its position in the rechargeable battery field. Batteries can undergo thermal runaway for various reasons, which can lead to major accidents, so preventing thermal runaway is a key part of the battery management system. Recently, research has been underway to classify thermal-runaway battery cells using machine learning. In this paper, we present a thermal-runaway hazard cell detection and verification algorithm using DBSCAN and statistical methods. An experiment was conducted to classify thermal-runaway hazard cells using only the resistance values measured by the battery management system (BMS). The results demonstrated the efficacy of the proposed algorithms in accurately classifying thermal-runaway cells. Furthermore, the proposed algorithm was able to distinguish thermal-runaway hazard cells from cells containing noise. Additionally, thermal-runaway hazard cells were detected early by optimizing the DBSCAN parameters with a grid search.
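
The DBSCAN-plus-grid-search idea in the abstract above can be sketched as follows: a minimal re-implementation of DBSCAN applied to one-dimensional resistance readings, with hypothetical values throughout; the paper's actual data, parameter ranges, and statistical validation step are not reproduced.

```python
import numpy as np

def dbscan(X, eps, min_samples):
    """Minimal DBSCAN: returns labels (-1 = noise). Illustrative only."""
    n = len(X)
    labels = np.full(n, -1)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)    # pairwise distances
    neighbors = [np.where(d[i] <= eps)[0] for i in range(n)]
    core = np.array([len(nb) >= min_samples for nb in neighbors])
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        stack, labels[i] = [i], cluster        # expand a new cluster from core point i
        while stack:
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if core[k]:
                        stack.append(k)
        cluster += 1
    return labels

rng = np.random.default_rng(0)
# hypothetical BMS resistance readings: normal cells plus two runaway outliers
normal = rng.normal(1.0, 0.02, size=(50, 1))
outliers = np.array([[1.5], [1.6]])
X = np.vstack([normal, outliers])

# grid search: pick (eps, min_samples) flagging the fewest points as noise
best = None
for eps in [0.01, 0.05, 0.1]:
    for ms in [3, 5, 10]:
        labels = dbscan(X, eps, ms)
        n_noise = int(np.sum(labels == -1))
        if 0 < n_noise <= 5 and (best is None or n_noise < best[2]):
            best = (eps, ms, n_noise)

print(best)  # winning (eps, min_samples, noise count)
```

With these toy numbers the two injected outliers are exactly the points left unclustered by the best parameter pair.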

Locating Microseismic Events using a Single Vertical Well Data (단일 수직 관측정 자료를 이용한 미소진동 위치결정)

  • Kim, Dowan;Kim, Myungsun;Byun, Joongmoo;Seol, Soon Jee
    • Geophysics and Geophysical Exploration / v.18 no.2 / pp.64-73 / 2015
  • Recently, hydraulic fracturing has been used in various fields, and microseismic monitoring is one of the best methods for judging where hydraulic fractures exist and how they develop. When locating microseismic events using single vertical well data, distances from the vertical array and depths from the surface are generally determined using time differences between compressional (P) wave and shear (S) wave arrivals, and azimuths are calculated using P-wave hodogram analysis. However, in field data it is sometimes hard to acquire P-wave data, whose amplitude is smaller than that of the S wave, because microseismic data often have a very low signal-to-noise (S/N) ratio. To overcome this problem, we developed a grid search algorithm which can find the event location using all combinations of arrival times recorded at the receivers. In addition, we introduced and analyzed a method which calculates azimuths using the S wave. Tests on synthetic data show that the inversion method using all combinations of arrival times and receivers can locate events without considering the origin time, even using only a single phase. The method also locates events with higher accuracy and is less sensitive to first-arrival picking errors than the conventional method. The method which calculates azimuths using the S wave provides reliable results when the dip between event and receiver is relatively small, but shows limitations when the dip exceeds about 20° in our model test.
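
The origin-time-free grid search described above can be sketched under simplifying assumptions (straight rays, a constant S-wave velocity, receivers in a single vertical well; all numbers hypothetical). Demeaning the residuals plays the same role as differencing arrival times over all receiver pairs, which removes the unknown origin time, so a single phase suffices.

```python
import numpy as np

v = 3000.0                                    # assumed S-wave velocity (m/s)
rec_z = np.array([1000., 1050., 1100., 1150., 1200.])   # receiver depths in well
true = (300.0, 1100.0)                        # event (offset, depth), to recover

def ttimes(x, z):
    return np.hypot(x, rec_z - z) / v         # straight-ray travel times

obs = ttimes(*true) + 0.002                   # arrivals with unknown origin-time shift

xs = np.arange(50.0, 600.0, 10.0)             # offset grid (m)
zs = np.arange(1000.0, 1250.0, 10.0)          # depth grid (m)
best, best_err = None, np.inf
for x in xs:
    for z in zs:
        r = obs - ttimes(x, z)
        err = np.sum((r - r.mean()) ** 2)     # demeaning removes the origin time
        if err < best_err:
            best, best_err = (x, z), err

print(best)  # → (300.0, 1100.0)
```

The misfit is zero only where the residuals are constant across all receivers, i.e. where the predicted moveout matches the observed one up to a time shift.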

Robust determination of control parameters in K chart with respect to data structures (데이터 구조에 강건한 K 관리도의 관리 모수 결정)

  • Park, Ingkeun;Lee, Sungim
    • Journal of the Korean Data and Information Science Society / v.26 no.6 / pp.1353-1366 / 2015
  • These days, the Shewhart control chart for evaluating process stability is widely used in various fields, but it relies on strict distributional assumptions. In real-life problems these assumptions are often violated, since many quality characteristics follow non-normal distributions; the problem is even more serious for multivariate quality characteristics. To overcome this, many researchers have studied non-parametric control charts. Recently, the SVDD (Support Vector Data Description) control chart based on the RBF (Radial Basis Function) kernel, called the K-chart, which determines a description of the data region of the in-control process, has been used in various fields. To apply the K-chart, however, the kernel parameter and other settings must be predetermined, and their selection is important. For this, many researchers use a grid search to optimize the parameters, but it has problems such as selecting the search range and the computational cost and time. In this paper, we study via simulation how the efficient parameter regions change as the data structure varies, propose a new method for determining the parameters so that it can be used easily, and discuss a robust choice of parameters for various data structures. In addition, we apply the method to a real example and evaluate its performance.
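
A rough sketch of kernel-parameter selection by grid search for a K-chart-style monitor, with a strong simplification: instead of solving the SVDD dual, distances are measured to the feature-space centroid (equivalent to SVDD with all α_i equal), and the selection criterion (matching a target false-alarm rate on held-out in-control data) is an assumption, not the paper's method.

```python
import numpy as np

def rbf(a, b, gamma):
    """RBF kernel matrix between row-sets a and b."""
    return np.exp(-gamma * np.sum((a[:, None] - b[None, :]) ** 2, axis=2))

def kdist2(X_train, X, gamma):
    """Squared kernel distance to the feature-space centroid
    (SVDD with all alpha_i = 1/n; constant terms dropped)."""
    return 1.0 - 2.0 * rbf(X, X_train, gamma).mean(axis=1)

rng = np.random.default_rng(1)
train = rng.normal(0, 1, size=(200, 2))       # in-control data
holdout = rng.normal(0, 1, size=(200, 2))     # held-out in-control data

# grid search over gamma: pick the value whose control limit (95th percentile
# of training distances) gives close to a 5% false-alarm rate on holdout data
target = 0.05
best = None
for gamma in [0.01, 0.1, 0.5, 1.0, 5.0]:
    limit = np.quantile(kdist2(train, train, gamma), 0.95)
    far = np.mean(kdist2(train, holdout, gamma) > limit)
    if best is None or abs(far - target) < abs(best[1] - target):
        best = (gamma, far)

print(best)  # chosen gamma and its holdout false-alarm rate
```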

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.131-145 / 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the area of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. The accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes the accurate quotation of ATP quite difficult. Various alternative models for an ATP system with time lags have been developed and evaluated, and in most cases they assume that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, so models developed using integer time lags only approximate real systems, and the differences caused by this approximation frequently result in significant accuracy degradation. To introduce an ATP model with non-integer time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research directly related to the topic of this paper.
They propose a modeling framework for systems with non-integer time lags and show how to apply it to a variety of systems including continuous-time flows, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and can represent real-world systems more accurately. Previously, to cope with non-integer time lags, a system was usually modeled either by rounding the lags to the nearest integers or by subdividing the time grid so that the lags become integer multiples of the grid. Each approach has a critical weakness: the first underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management, focusing on a globally networked system of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows that the proposed system has a large effect in SCM. The system we consider is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers, and for this system we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function with non-integer time lags. The model is developed under a framework suited to non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm.
We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to handle real-valued variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
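
A toy version of the evolutionary procedure described above might look like the following, with hypothetical profit/demand/capacity numbers, a real-valued chromosome, and a repair-style regeneration step standing in for the paper's regeneration procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical capacity-allocation instance: allocate shared capacity to
# 5 orders, each with a profit per unit and a demand cap.
profit = np.array([5.0, 3.0, 8.0, 2.0, 6.0])
demand = np.array([40.0, 30.0, 20.0, 50.0, 25.0])
capacity = 100.0

def repair(x):
    """Regeneration step: clip to demand and rescale so total <= capacity."""
    x = np.clip(x, 0.0, demand)
    total = x.sum()
    return x * (capacity / total) if total > capacity else x

def fitness(x):
    return profit @ x

pop = [repair(rng.uniform(0, demand)) for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                                # elitist selection
    children = []
    for _ in range(20):
        a, b = rng.choice(20, size=2, replace=False)
        w = rng.uniform(0, 1, size=5)                 # blend crossover
        child = w * parents[a] + (1 - w) * parents[b]
        child += rng.normal(0, 2.0, size=5)           # Gaussian mutation
        children.append(repair(child))                # repair infeasible genes
    pop = parents + children

best = max(pop, key=fitness)
print(round(fitness(best), 1))
```

Because every chromosome is repaired into the feasible region, selection can operate on profit alone and the population converges toward the best feasible allocation.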

Optimization of PRISM Parameters and Digital Elevation Model Resolution for Estimating the Spatial Distribution of Precipitation in South Korea (남한 강수량 분포 추정을 위한 PRISM 매개변수 및 수치표고모형 최적화)

  • Park, Jong-Chul;Jung, Il-Won;Chang, Hee-Jun;Kim, Man-Kyu
    • Journal of the Korean Association of Geographic Information Studies / v.15 no.3 / pp.36-51 / 2012
  • The demand for climatological datasets on a regular grid is increasing in diverse fields such as ecological and hydrological modeling as well as regional climate impact studies. PRISM (Precipitation-Elevation Regressions on Independent Slopes Model) is a useful method for estimating high-altitude precipitation, but the optimization of its parameters and of the DEM (Digital Elevation Model) resolution has not been well discussed for South Korea. This study implemented PRISM and then optimized the model parameters and the DEM resolution for producing gridded annual average precipitation data for South Korea at 1 km spatial resolution over the period 2000-2005. The SCE-UA (Shuffled Complex Evolution-University of Arizona) method was employed for the optimization. In addition, a sensitivity analysis investigated the change in the model output with respect to variations in the parameters and the DEM spatial resolution. The results show that the optimal maximum radius within which the station search is conducted is 67 km, the minimum radius within which all stations are included is 31 km, the minimum number of stations required for the cell precipitation-elevation regression is four, and the optimal DEM resolution is 1 × 1 km. This study also shows that the PRISM output is very sensitive to variations in DEM spatial resolution. It contributes to improving the accuracy of the PRISM technique as applied to South Korea.

Weighted Energy Detector for Detecting Unknown Threat Signals in Electronic Warfare System in Weak Power Signal Environment (전자전 미약신호 환경에서 미상 위협 신호원의 검출 성능 향상을 위한 가중 에너지 검출 기법)

  • Kim, Dong-Gyu;Kim, Yo-Han;Lee, Yu-Ri;Jang, Chungsu;Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.3 / pp.639-648 / 2017
  • Electronic warfare systems for extracting information from threat signals may have to operate where the power of the received signal is weak. To detect threat signals precisely and rapidly, methods exploiting the whole energy of the received signal are required instead of conventional methods using a single received sample. To utilize the whole energy, windows of numerous sizes would need to be implemented in the detector to handle every possible length of the received signal, because no preliminary information about the uncooperative signals is assumed. However, this grid search over window sizes requires too much computation to be practically implemented. To resolve this complexity problem, the number of windows can be reduced by selecting a smaller number of representative windows; each representative window must then cover an interval of the considered range, and the resulting mismatch between the length of the received signal and the window sizes degrades detection performance. We therefore propose a weighted energy detector, which improves detection performance compared with the conventional energy detector when the window size is smaller than the length of the received signal, and show that the proposed method exhibits the same performance under other circumstances.
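
A minimal sketch of a sliding-window energy detector on synthetic data, comparing flat (conventional) weights with an illustrative triangular weighting; the weighting scheme and all signal parameters are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

N, win = 2000, 64
x = rng.normal(0, 1.0, N)                 # unit-variance receiver noise
x[1000:1160] += 1.0                       # weak (0 dB) threat signal, longer than the window

def stats(x, win, w):
    """Sliding-window weighted energy statistic: sum of w_k * x_k^2."""
    w = w / w.sum()
    e = x ** 2
    return np.array([w @ e[i:i + win] for i in range(len(x) - win)])

flat = np.ones(win)                        # conventional energy detector
tri = np.bartlett(win) + 1e-3              # illustrative triangular weighting

for name, w in [("flat", flat), ("weighted", tri)]:
    s = stats(x, win, w)
    thr = np.quantile(s[:900], 0.999)      # threshold from a noise-only segment
    hits = np.where(s > thr)[0]
    print(name, len(hits))
```

The detection statistic rises wherever the window overlaps the embedded signal; the choice of weights trades off effective window length against variance of the statistic, which is the mismatch the paper's weighting addresses.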

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as on-demand services. This is considered a key concept that will lead the shift from an ownership-based paradigm to a new pay-per-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand the research trends more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords using social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud-computing-related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics using a proposed cloud computing "research trend map", which positions research topics in a two-dimensional space whose dimensions are keyword frequency (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis). Based on these two dimensions, the space is divided into four areas: maturation, growth, promising, and decline.
An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and the area where both are low is a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the keyword analysis, cloud computing and grid computing showed high centrality in 2009, and keywords for key elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010-2011; in 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. These results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research.
The early stage of cloud computing research focused on understanding and investigating cloud computing as an emergent technology and linking it to relevant established computing concepts. After this stage, security and virtualization technologies became main issues, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study revealed that current cloud computing research has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
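
The quadrant logic of the research trend map follows directly from the definitions above and can be written out with toy keyword coordinates and assumed thresholds (the paper's actual frequencies, growth rates, and cut points are not reproduced).

```python
# Illustrative quadrant assignment for the "research trend map":
# keyword frequency on the X-axis, growth rate of degree centrality on the Y-axis.
keywords = {
    # name: (frequency, centrality growth rate), toy numbers
    "security":               (12, 0.9),
    "virtualization":         (30, 0.8),
    "grid computing":         (9,  0.05),
    "distributed processing": (28, 0.2),
}

freq_cut, growth_cut = 20, 0.5            # assumed thresholds dividing the map

def quadrant(freq, growth):
    if freq >= freq_cut:
        return "growth" if growth >= growth_cut else "maturation"
    return "promising" if growth >= growth_cut else "decline"

trend = {k: quadrant(*v) for k, v in keywords.items()}
print(trend)
# security → promising, virtualization → growth,
# grid computing → decline, distributed processing → maturation
```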

Elicitation of Collective Intelligence by Fuzzy Relational Methodology (퍼지관계 이론에 의한 집단지성의 도출)

  • Joo, Young-Do
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.17-35 / 2011
  • Collective intelligence is a commons-based production arising from the collaboration and competition of many peer individuals; in other words, it is the aggregation of individual intelligence into the wisdom of the crowd. Recently, utilization of collective intelligence has become one of the emerging research areas, since it has been adopted as an important principle of Web 2.0, which aims at openness, sharing, and participation. This paper introduces an approach that seeks collective intelligence by recognizing the relations and interactions among individual participants. It describes a methodology well suited to evaluating individual intelligence in information retrieval and classification as an application field. The research investigates how to derive and represent such cognitive intelligence from individuals by applying fuzzy relational theory to personal construct theory and the knowledge grid technique. Crucial to this research is formally implementing and interpretively processing the cognitive knowledge of participants who form mutual relations and social interactions. What is needed is a technique to analyze the cognitive intelligence structure in the form of a Hasse diagram, which is an instantiation of this perceptive human intelligence. The search for collective intelligence requires a theory of similarity to deal with the underlying problems: clustering social subgroups of individuals through identification of individual intelligence and the commonality among intelligences, and then eliciting the collective intelligence that aggregates the congruence or sharing of all participants in the entire group. Unlike standard approaches to similarity based on statistical techniques, the presented method employs a theory of fuzzy relational products, with the related computational procedures, to cover issues of similarity and dissimilarity.

Analysis of Characteristics of Seismic Source and Response Spectrum of Ground Motions from Recent Earthquake near the Backryoung Island (최근 백령도해역 발생지진의 지진원 및 응답스펙트럼 특성 분석)

  • Kim, Jun-Kyoung
    • Geophysics and Geophysical Exploration / v.14 no.4 / pp.274-281 / 2011
  • We analysed ground motions from the Mw 4.3 earthquake near Backryoung Island with respect to the seismic source focal mechanism and the horizontal response spectrum. The focal mechanism was compared with the known principal stress orientation of the Korean Peninsula, and the horizontal response spectrum was compared with those of US NRC Regulatory Guide 1.60 and the Korean national building code. Ground motions from 3 stations, including the vertical, radial, and tangential components of each station, were used in a grid search for the moment-tensor seismic source. The principal stress orientation from this study, ENE-WSW, is fairly consistent with that of the Korean Peninsula. Horizontal response spectra from 30 observed ground motions were analysed and compared with both the seismic design response spectrum (Reg. Guide 1.60) applied to domestic nuclear power plants and the Korean standard design response spectrum for general structures and buildings (1997). Each of the 30 horizontal ground motions was normalized with respect to its peak acceleration value. The results showed that the horizontal response spectra revealed higher values than Reg. Guide 1.60 for frequency bands above 3 Hz. The results were also compared with the Korean standard response spectrum for 3 different soil types and showed that the response spectra revealed higher values than the Korean standard response spectrum (SD soil condition) for periods below 0.8 s. Therefore, through qualitative improvement and quantitative enhancement of the observed ground motions, the conservatism of the horizontal seismic design response spectrum should be examined more carefully for the higher frequency bands.