• Title/Summary/Keyword: Detection Effectiveness Analysis (탐지 효과도 분석)

Detection of the gas-saturated zone by spectral decomposition using Wigner-Ville distribution for a thin layer reservoir (얇은 저류층 내에서 WVD 빛띠 분해에 의한 가스 포화 구역 탐지)

  • Shin, Sung-Il;Byun, Joong-Moo
    • Geophysics and Geophysical Exploration
    • /
    • v.15 no.1
    • /
    • pp.39-46
    • /
    • 2012
  • Recently, stratigraphic reservoirs have been receiving more attention than structural reservoirs, which have mostly already been developed. However, recognizing thin stratigraphic gas reservoirs in a stacked section is usually difficult because of tuning effects. Moreover, if the reflections from the brine-saturated region of a thin layer have the same polarity as those from the gas-saturated region, the gas reservoir cannot easily be identified with conventional data processing techniques. In this study, we introduce a way to delineate the gas-saturated region in a thin-layer reservoir using a spectral decomposition method. First, the amplitude spectrum as a function of frequency and incident angle was investigated for media representing Class 3, Class 1, and Class 4 AVO responses. The results show that the maximum difference in the amplitude spectra between brine- and gas-saturated thin layers occurs around the peak frequency, independent of the incident angle and the type of AVO response. In addition, at the peak frequency the amplitude spectra of the gas-saturated zone are greater than those of the brine-saturated zone for Class 3 and Class 4, while the opposite holds for Class 1. Based on these results, we applied the spectral decomposition method to the stacked section in order to distinguish the gas-saturated zone from the brine-saturated zone in a thin-layer reservoir. To verify the new method, we constructed a thin-layer velocity model containing both gas- and brine-saturated zones with the same reflection polarities. As a result, in the spectrally decomposed sections near the peak frequency obtained with the Wigner-Ville Distribution (WVD), we could identify the difference between reflections from the gas- and brine-saturated regions in the thin-layer reservoir, which was hardly distinguishable in the stacked section.
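
The spectral decomposition above rests on the Wigner-Ville distribution: the DFT over lag of the instantaneous autocorrelation. As a rough sketch of that transform (our own minimal illustration, not the authors' implementation):

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(trace):
    """Discrete Wigner-Ville distribution of a real seismic trace (sketch).

    W[k, n] is the DFT over lag tau of x[n+tau] * conj(x[n-tau]); the
    analytic signal suppresses interference from negative frequencies.
    Frequency bin k maps to k*fs/(2N) because the lag axis is doubled.
    """
    x = hilbert(trace)                    # analytic signal of the trace
    N = len(x)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        tmax = min(n, N - 1 - n)          # largest lag staying in bounds
        tau = np.arange(-tmax, tmax + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[:, n] = np.fft.fft(kernel)      # spectrum at time sample n
    return W.real                         # WVD of an analytic signal is real
```

In practice smoothed variants (pseudo-WVD) are often preferred, since the quadratic kernel introduces cross-terms between interfering reflections.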

The Characteristics of Asian Dust Observed in Japan Deflecting the Korean Peninsula (2010. 5. 22.-5. 25.) (한반도를 돌아 일본에서 관측된 황사의 특징 (2010년 5월 22일-5월 25일))

  • Ahn, Bo-Young;Chun, Young-Sin
    • Journal of the Korean earth science society
    • /
    • v.32 no.4
    • /
    • pp.388-401
    • /
    • 2011
  • Asian dust was observed a total of 66 times in springtime during the period from 2002 to 2010: 26 cases in March, 23 in April, and 17 in May. This study investigates an Asian dust episode that occurred from 22 to 25 May 2010, based on synoptic weather patterns, the wind vector at 850 hPa, relative humidity at 1000 hPa, jet streams and the wind vector at 300 hPa, PM10 concentrations in Korea, and satellite imagery. In this case, the Asian dust originated on 22 May along the rear of a developing low-pressure system in Mongolia. The dust was then transported southeastward and bypassed the Korean Peninsula from 23 to 24 May before reaching Japan on 25 May. Jet streams on 24 May bypassed the Korean Peninsula and induced the development of a surface low pressure centered over the peninsula. The resulting air flow was critical to the trajectory of the Asian dust, which likewise bypassed the Korean Peninsula. 72-hour backward trajectory data reveal that the Shandong Peninsula and the East China Sea were the points of origin for the air flows that swept through the Japanese sites where Asian dust was observable to the naked eye. The Asian dust pathway was confirmed by the horizontal dust distribution in RGB imagery from the MODIS satellites, which captured the dust moving over the Shandong Peninsula, the East China Sea, and the region northwest of Kyushu in Japan. Since the synoptic pattern and transport path of this case are far from the typical ones on which Asian dust forecasting techniques have long been based, this study provides a good example of an exceptional Asian dust pattern and can be used for more accurate Asian dust forecasting.

Regional Characteristics of Global Warming: Linear Projection for the Timing of Unprecedented Climate (지구온난화의 지역적 특성: 전례 없는 기후 시기에 대한 선형 전망)

  • SHIN, HO-JEONG;JANG, CHAN JOO
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.21 no.2
    • /
    • pp.49-57
    • /
    • 2016
  • Even if an external forcing that drives climate change is applied uniformly over the globe, the corresponding climate change and the feedbacks of the climate system differ by region. Detection of the global warming signal has therefore been performed on a regional scale as well as for the global average, against the internal variability and other noise involved in climate change. The purpose of this study is to estimate the timing of unprecedented climate due to global warming and to analyze the regional differences in the estimates. For this purpose, unlike previous studies that used climate simulation data, we used an observational dataset to estimate the magnitude of internal variability and the future temperature change. We calculated a linear trend in surface temperature from the historical record for 1880 to 2014, and took the magnitude of internal variability to be the largest temperature displacement from that linear trend. The timing of unprecedented climate was defined as the first year in which the predicted minimum temperature exceeds the maximum temperature in the historical record and remains above it thereafter. Assuming that the linear trend and the maximum displacement persist in the future, an unprecedented climate over land would arrive within 200 years from now in western Africa; the low latitudes of Eurasia, including India and the southern Arabian Peninsula; the high latitudes of North America, including Greenland and mid-western Canada; the low latitudes of South America, including the Amazon; the areas surrounding the Ross Sea in Antarctica; and parts of East Asia, including the Korean Peninsula. On the other hand, an unprecedented climate would come later, after 400 years, in the high latitudes of Eurasia including northern Europe, and in the middle and southern parts of North America including the U.S.A. and Mexico. For the ocean, an unprecedented climate would arrive within 200 years over the Indian Ocean, the middle latitudes of the North and South Atlantic, parts of the Southern Ocean, the Antarctic Ross Sea, and parts of the Arctic Sea. Meanwhile, an unprecedented climate would arrive only after thousands of years over other ocean regions, including the eastern tropical Pacific and the North Pacific middle latitudes, where internal variability is large. In summary, the spatial pattern of the timing of unprecedented climate differs for each continent; for the ocean, it is strongly affected by large internal variability except in high-latitude regions with a significant warming trend. The timing of unprecedented climate would thus not be uniform over the globe but considerably different by region. Our results suggest that it is necessary to consider internal variability as well as the regional warming rate when planning climate change mitigation and adaptation policies.
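
The paper's definition reduces to simple arithmetic on the historical record. A minimal sketch of that calculation (our own illustration; the function name is hypothetical, and `years`/`temps` stand for any regional observational series as NumPy arrays):

```python
import numpy as np

def unprecedented_year(years, temps):
    """First year whose predicted minimum temperature exceeds the
    historical maximum, following the abstract's definition (sketch)."""
    slope, intercept = np.polyfit(years, temps, 1)   # linear warming trend
    trend = slope * years + intercept
    displacement = np.max(np.abs(temps - trend))     # internal-variability proxy
    if slope <= 0:
        return None                                  # no warming trend
    # Solve slope*y + intercept - displacement > max(temps) for year y;
    # with a linear trend the climate then stays unprecedented thereafter.
    y = (temps.max() + displacement - intercept) / slope
    return int(np.ceil(y))
```

Regions with large internal variability (large `displacement`) push this year far into the future, which is exactly the pattern the abstract reports for the eastern tropical Pacific and the North Pacific mid-latitudes.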

Anisotropic radar crosshole tomography and its applications (이방성 레이다 시추공 토모그래피와 그 응용)

  • Kim Jung-Ho;Cho Seong-Jun;Yi Myeong-Jong
    • Korean Society of Exploration Geophysicists: Conference Proceedings (한국지구물리탐사학회 학술대회논문집)
    • /
    • 2005.09a
    • /
    • pp.21-36
    • /
    • 2005
  • Although the main geology of Korea consists of granite and gneiss, it is not uncommon to encounter anisotropy phenomena in crosshole radar tomography even when the basement is crystalline rock. To address the anisotropy problem, we have developed and continuously upgraded an anisotropic inversion algorithm, assuming heterogeneous elliptic anisotropy, that reconstructs three kinds of tomograms: maximum velocity, minimum velocity, and the direction of the symmetry axis. In this paper, we discuss the developed algorithm and introduce case histories of anisotropic radar tomography in Korea. The first two case histories were conducted for the construction of infrastructure, with the main objective of locating cavities in limestone. The last two were performed in granite and gneiss areas. The anisotropy in the granite area was caused by fine fissures aligned in the same direction, while that in the gneiss and limestone area was caused by the alignment of the constituent minerals. Through these case histories we show that the anisotropic characteristics themselves provide additional important information for understanding the internal state of basement rock. In particular, the anisotropy ratio, defined as the normalized difference between maximum and minimum velocities, together with the direction of maximum velocity, is helpful in interpreting the borehole radar tomogram.
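
The anisotropy ratio mentioned above admits a one-line formulation. A sketch assuming "normalized difference" means division by the mean velocity (the paper's exact denominator may differ):

```python
def anisotropy_ratio(v_max, v_min):
    """Normalized difference between maximum and minimum radar velocities.

    Assumes normalization by the mean velocity; the paper may define
    the denominator differently (e.g., v_max alone).
    """
    return (v_max - v_min) / ((v_max + v_min) / 2.0)

# Example: v_max = 0.12 m/ns, v_min = 0.10 m/ns  ->  ratio ~ 0.18
print(anisotropy_ratio(0.12, 0.10))
```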

Studies on Estimation of Fish Abundance Using an Echo Sounder (1) - Experimental Verification of the Theory for Estimating Fish Density - (어군탐지기에 의한 어군량 추정에 관한 기초적 연구 (1) - 어군량추정이론의 검증실험 -)

  • 이대재
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.27 no.1
    • /
    • pp.1-12
    • /
    • 1991
  • An experiment was carefully designed and performed in a laboratory tank to verify the theory behind the echo-integration technique for estimating the density of a fish school, using steel spheres. The spheres used to simulate a fish school were randomly distributed throughout the insonified volume to produce acoustic echoes similar to those scattered from real fish schools. The backscattered echoes were measured as a function of target density at two frequencies, 50 kHz and 200 kHz. Data acquisition, processing, and analysis were performed with a microcomputer-based sonar-echo processor including an FFT analyzer. The acoustic scattering characteristics of a 36 cm mackerel were investigated by measuring fish echoes at frequencies ranging from 47.8 kHz to 52.0 kHz. The fluctuation of bottom echoes caused by fish-school attenuation and multiple scattering in dense aggregations of fish was also examined by analyzing echograms of sardine schools obtained with a 50 kHz telesounder in a set-net's bagnet, and echograms obtained with a 50 kHz scientific echo sounder in the East China Sea. The results can be summarized as follows: 1. The measured and calculated echo shapes for the steel sphere used to simulate a fish school were in close agreement. 2. The waveform and amplitude of echo signals from a mackerel without a swimbladder fluctuated irregularly with the measuring frequency. 3. When a collection of 30 targets/m³ lay in the shadow region behind another collection of 5 targets/m³, the mean losses in echo energy for the 30 targets/m³ were about -0.4 dB at 50 kHz and about -0.2 dB at 200 kHz. 4. In the echograms obtained in the East China Sea, the bottom echoes fluctuated remarkably when dense aggregations of fish appeared between the transducer and the seabed. In particular, in the echograms of a sardine school obtained in a set-net's bagnet, the disappearance of bottom echoes and the lengthening of echo traces by fish aggregations were observed; the mean density of the sardine school was estimated at 36 fish/m³. This suggests that when the distribution density of fish in the ocean exceeds this value, the effects of fish-school attenuation and multiple scattering must be taken into account as a possible source of error in fish abundance estimates. 5. The relationship between the mean volume backscattering strength (SV, dB) and target density ($\rho$, No./m³) was expressed as SV = -46.2 + 13.7 log($\rho$) at 50 kHz and SV = -43.9 + 13.4 log($\rho$) at 200 kHz. 6. The difference between the experimentally derived and actual numbers of targets gradually decreased with increasing target density and was within 20% at a density of 30 targets/m³. From these results, we conclude that when the number of targets in the insonified volume is large, the echo-integration technique can validly estimate the density of fish schools.
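
Result 5 gives a direct way to turn a measured backscattering strength into a density estimate. A small sketch inverting the two reported regressions (the helper name is ours):

```python
# Reported relations (SV in dB, rho in targets/m^3):
#   SV = -46.2 + 13.7 * log10(rho)   at  50 kHz
#   SV = -43.9 + 13.4 * log10(rho)   at 200 kHz

def density_from_sv(sv_db, a, b):
    """Invert SV = a + b*log10(rho) for the target density rho."""
    return 10.0 ** ((sv_db - a) / b)

rho_50 = density_from_sv(-26.0, a=-46.2, b=13.7)    # ~29.8 targets/m^3
rho_200 = density_from_sv(-26.0, a=-43.9, b=13.4)   # ~21.7 targets/m^3
```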

Development of Acquisition and Analysis System of Radar Information for Small Inshore and Coastal Fishing Vessels - Suppression of Radar Clutter by CFAR - (연근해 소형 어선의 레이더 정보 수록 및 해석 시스템 개발 - CFAR에 의한 레이더 잡음 억제 -)

  • 이대재;김광식;신형일;변덕수
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.39 no.4
    • /
    • pp.347-357
    • /
    • 2003
  • This paper describes the suppression of sea clutter on a marine radar display using a cell-averaging CFAR (constant false alarm rate) technique, and the analysis of radar echo signal data in relation to the estimation of ARPA functions and the detection of the shadow effect in clutter returns. The echo signal was measured using an X-band radar located at Pukyong National University, with a horizontal beamwidth of $3.9^{\circ}$, a vertical beamwidth of $20^{\circ}$, a pulsewidth of $0.8\,{\mu}s$, and a transmitted peak power of 4 kW. The suppression of sea clutter was investigated for probabilities of false alarm between $10^{-0.25}$ and $10^{-1.0}$, and the performance of cell-averaging CFAR was compared with that of an ideal fixed threshold. The motion vectors and trajectories of ships were extracted, and the shadow effect in clutter returns was analyzed. The results obtained are summarized as follows: 1. The ARPA plotting results and motion vectors for acquired targets, extracted by analyzing the echo signal data, were displayed on the PC-based radar system, and the continuous trajectory of ships was tracked in real time. 2. To suppress sea clutter in a noisy environment, a cell-averaging CFAR processor with a total CFAR window of 47 samples (20+20 reference cells, 3+3 guard cells, and the cell under test) was designed. On a particular data set acquired at Suyong Man, Busan, Korea, when the probability of false alarm applied to the designed processor was $10^{-0.75}$, the suppression of radar clutter was significantly improved. These results suggest that the designed cell-averaging CFAR processor is very effective in uniform clutter environments. 3. It is concluded that cell-averaging CFAR can give a considerable improvement in the suppression of uniform sea clutter compared with an ideal fixed threshold. 4. The effective height of a target, estimated by analyzing the shadow effect in clutter returns over a number of range bins behind the target as seen from the radar antenna, was approximately 1.2 m; this height information can be used to extract a shape parameter of the tracked target.
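
The abstract fully specifies the CFAR window (20+20 reference cells, 3+3 guard cells, one cell under test), which is enough to sketch the detector. A minimal cell-averaging CFAR over a received power profile, using the standard threshold scaling for exponentially distributed clutter (our illustration, not the authors' code):

```python
import numpy as np

def ca_cfar(power, n_ref=20, n_guard=3, pfa=10 ** -0.75):
    """Cell-averaging CFAR detection mask over a 1-D power profile."""
    n = 2 * n_ref                            # total reference cells (40)
    alpha = n * (pfa ** (-1.0 / n) - 1.0)    # threshold factor for desired Pfa
    half = n_ref + n_guard                   # one-sided window extent
    hits = np.zeros(len(power), dtype=bool)
    for i in range(half, len(power) - half):
        lead = power[i - half : i - n_guard]          # leading reference cells
        lag = power[i + n_guard + 1 : i + half + 1]   # lagging reference cells
        noise = (lead.sum() + lag.sum()) / n          # local clutter estimate
        hits[i] = power[i] > alpha * noise            # test the cell under test
    return hits
```

With `pfa=10**-0.75` this reproduces the false-alarm setting the authors found most effective on their Suyong Man data set.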

A Study on Defense and Attack Model for Cyber Command Control System based Cyber Kill Chain (사이버 킬체인 기반 사이버 지휘통제체계 방어 및 공격 모델 연구)

  • Lee, Jung-Sik;Cho, Sung-Young;Oh, Heang-Rok;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.22 no.1
    • /
    • pp.41-50
    • /
    • 2021
  • "Cyber kill chain" derives from the traditional military term "kill chain", meaning a continuous and cyclical process from detection to destruction of military targets requiring destruction, divided into several distinct actions. The kill chain evolved existing operational procedures to deal effectively with time-limited emergency targets, such as nuclear weapons and missiles, that require an immediate response because of changing locations and increased risk. It began with the military concept of defeating the attacker's intended purpose by preventing any one stage of the process from functioning. The basic concept of the cyber kill chain is thus that an attack performed by a cyber attacker consists of distinct stages, and the attacker can achieve the attack goal only when every stage is performed successfully. From a defensive point of view, if a detailed response procedure is prepared for each stage, the chain of attacks is broken and the attack can be neutralized or delayed. Likewise, from an offensive point of view, if a specific procedure is prepared at each stage, the chain of attacks can succeed and the target can be neutralized. A cyber command and control system applies to both defense and attack: it should present defensive and offensive countermeasures to neutralize the enemy's kill chain when defending, and step-by-step procedures to neutralize the enemy when attacking. This paper therefore proposes a cyber kill chain model from the perspectives of both defense and attack of the cyber command and control system, and also presents a threat classification/analysis/prediction framework for the cyber command and control system from the defensive aspect.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facility failures in particular are irregular, and because of interdependence between devices the cause is difficult to determine. Previous studies predicting failures in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and we focused on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: they cause other server failures, or are triggered by other servers. In other words, whereas existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. To define a complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed simultaneously within the constructed sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used, in consideration of the fact that the level of multiple failures differs by server. This algorithm increases prediction accuracy by giving greater weight to servers with a larger impact on the failure. The study began by defining the types of failure and selecting the analysis targets.
In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy in the complex-server case by optimizing each server's threshold. In the first experiment, in the single-server case, three of the five servers were predicted to have no failure even though failures actually occurred; assuming multiple servers, all five servers were correctly predicted to have failed. This result supports the hypothesis that there is an effect between servers. The study thus confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
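
The 5-minute simultaneity rule above can be sketched directly. A minimal pandas illustration with made-up device names; chaining events whose gaps are under 5 minutes into one group is our reading of the rule:

```python
import pandas as pd

# Hypothetical failure log: one row per (timestamp, device) failure event.
events = pd.DataFrame({
    "time": pd.to_datetime([
        "2020-01-01 00:00", "2020-01-01 00:03",
        "2020-01-01 00:20", "2020-01-01 00:24",
    ]),
    "device": ["server-01", "server-02", "server-03", "server-01"],
}).sort_values("time")

# A failure within 5 minutes of the previous one joins the same
# "simultaneous" group; a larger gap starts a new group.
new_group = events["time"].diff() > pd.Timedelta(minutes=5)
events["group"] = new_group.cumsum()
sequences = events.groupby("group")["device"].apply(list)
print(sequences)   # group 0: [server-01, server-02]; group 1: [server-03, server-01]
```

Such sequences are what the study mines for the five most frequently co-failing devices before training the LSTM/Hierarchical Attention Network model.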

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors; the method is therefore known to perform well for both community detection and structural equivalence. The vector obtained by embedding the network has a fixed length determined by walks from each starting node, so node sequences are easy to apply as input to downstream models such as Logistic Regression, Support Vector Machines, and Random Forests. Based on these features of the Node2vec graph embedding method, this study applied it to international trade information for the Korean food and beverage industry, with the aim of contributing to extensive-margin diversification for Korea in the industry's global value chain. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance, superior to the baseline binary classifier based on Logistic Regression (precision 0.95, recall 0.73, F1 score 0.83). In addition, the LightGBM-based optimal prediction model outperformed the link prediction model of previous studies used as a benchmark, which recorded a recall of only 0.75 against 0.79 for the proposed model. The difference in performance between the benchmark and this study's model is due to the model learning strategy: in this study, trades were grouped by trade value scale, and prediction models were trained differently for these groups. Specifically, the strategies were (1) randomly masking and learning a model over all trades without any condition on trade value, (2) randomly masking a portion of the trades with above-average trade value and training the model, and (3) randomly masking some of the trades in the top 25% by trade value and training the model. The experiments confirmed that the model trained by randomly masking some of the above-average-value trades performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and LightGBM, and derives useful implications for weight update strategies that can improve link prediction during model training.
This study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, which have rarely been studied this way. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe the approach has sufficient usefulness as a tool for policy decision-making.
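
The modeling pipeline (embed the trade network with Node2vec, then classify candidate links with LightGBM) can be sketched end to end. A toy version assuming the community `node2vec` package and `lightgbm`; the graph, parameters, and edge-feature choice (Hadamard product) are our own stand-ins, not the study's configuration:

```python
import networkx as nx
import numpy as np
from node2vec import Node2Vec          # pip install node2vec
from lightgbm import LGBMClassifier    # pip install lightgbm

G = nx.karate_club_graph()             # stand-in for the country trade network

# 1. Learn node embeddings from biased random walks.
n2v = Node2Vec(G, dimensions=32, walk_length=10, num_walks=50, quiet=True)
model = n2v.fit(window=5, min_count=1)

def edge_vector(u, v):
    """Hadamard product of endpoint embeddings: a common edge feature."""
    return model.wv[str(u)] * model.wv[str(v)]

# 2. Training data: observed links as positives, sampled non-links as
#    negatives (the study instead masks existing trades by value scale).
pos = list(G.edges())[:30]
neg = list(nx.non_edges(G))[:30]
X = np.array([edge_vector(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# 3. Binary link-prediction classifier.
clf = LGBMClassifier(n_estimators=50, min_child_samples=5).fit(X, y)
p_link = clf.predict_proba([edge_vector(0, 9)])[0, 1]   # probability of a link
```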

Effective Geophysical Methods in Detecting Subsurface Caves: On the Case of Manjang Cave, Cheju Island (지하 동굴 탐지에 효율적인 지구물리탐사기법 연구: 제주도 만장굴을 대상으로)

  • Kwon, Byung-Doo;Lee, Heui-Soon;Lee, Gyu-Ho;Rim, Hyoung-Rea;Oh, Seok-Hoon
    • Journal of the Korean earth science society
    • /
    • v.21 no.4
    • /
    • pp.408-422
    • /
    • 2000
  • Multiple geophysical methods were applied over the Manjang Cave area on Cheju Island to compare the effectiveness of each method for the exploration of underground cavities. The methods used were gravity, magnetic, electrical resistivity, and GPR (Ground Penetrating Radar) surveys, whose instruments are portable and whose operation is relatively economical. We chose seven survey lines and applied appropriate combinations of surveys depending on field conditions; in the case of the magnetic method, two-dimensional grid-type surveys were carried out to cover the survey area. The survey results reveal the characteristic responses of each method relatively well. Among the applied methods, the electrical resistivity methods proved the most effective in detecting the Manjang Cave and surrounding miscellaneous cavities. In particular, on the inverted resistivity section obtained from dipole-dipole array data, the two-dimensional distribution of high-resistivity cavities is revealed clearly. The gravity and magnetic data are easily contaminated by various noises and do not show responses definitive enough to locate and delineate the Manjang Cave, but they provide useful information for verifying the dipole-dipole resistivity results. The grid-type 2-D magnetic survey data show the trend of cave development well and may be used as a reconnaissance regional survey for determining survey lines for further detailed exploration. The GPR data respond very sensitively to various shallow volcanic structures, such as thin spaces between lava flows and small cavities, so the response of the main cave could not be identified. Although each geophysical method provides its own useful information, the integrated interpretation of multiple survey data is most effective for the investigation of underground caves.
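
Since the dipole-dipole resistivity data did the decisive work here, the textbook apparent-resistivity conversion for that array helps frame what the inverted sections represent. A sketch using the standard geometric factor (general formula, not taken from the paper; the example numbers are invented):

```python
import math

def apparent_resistivity_dd(delta_v, current, a, n):
    """Dipole-dipole apparent resistivity: rho_a = pi*n*(n+1)*(n+2)*a * dV/I,
    with electrode spacing a (m) and dipole separation factor n."""
    k = math.pi * n * (n + 1) * (n + 2) * a   # geometric factor (m)
    return k * delta_v / current              # ohm-m

# Example: a = 10 m, n = 3, dV = 5 mV at I = 100 mA  ->  ~94 ohm-m
rho_a = apparent_resistivity_dd(5e-3, 0.1, a=10.0, n=3)
```

An air-filled cavity appears as a localized high-resistivity anomaly in the pseudosection assembled from such values, which is what the inversion then resolves into the section.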
