• Title/Summary/Keyword: 밀도기반 (density-based)


A Study on the Gravity Anomaly of Okcheon Group based on the Gravity Measurement around Chung Lake (충주호 주변의 중력 측정에 의한 옥천계의 중력이상 연구)

  • Park, Jong-Oh;Song, Moo-Young
    • Journal of the Korean earth science society
    • /
    • v.32 no.1
    • /
    • pp.12-20
    • /
    • 2011
  • Gravity was measured at 256 stations around Chungju Lake to study subsurface geological distributions and subterranean mass discontinuities from the gravity anomalies over the Metamorphic Complex, the Okcheon Group, the Great Limestone Group of the Choson Supergroup, and Cretaceous biotite granites. The Okcheon Group showed a high Bouguer gravity anomaly, while the Great Limestone Group of the Choson Supergroup showed a relatively low anomaly. The mean depth of the subterranean mass discontinuities is about 2.0 km, deepening along the Suchangri Formation from the Hwanggangri and Moonjuri formations. In general, when the subterranean mass discontinuities were imaged from the Bouguer gravity anomaly, the Okcheon Group appeared shallower than the Great Limestone Group of the Choson Supergroup.
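As a reminder of how a Bouguer anomaly like the one mapped above is obtained from raw measurements, here is a minimal sketch of the standard simple Bouguer reduction. The station values and latitude below are invented for illustration, not data from the paper.

```python
import math

def normal_gravity(lat_deg):
    """GRS80 normal gravity on the ellipsoid, in mGal."""
    phi = math.radians(lat_deg)
    s2 = math.sin(phi) ** 2
    return 978032.67715 * (1 + 0.0052790414 * s2 + 0.0000232718 * s2 * s2)

def simple_bouguer_anomaly(g_obs, lat_deg, h_m, rho=2.67):
    """g_obs: observed gravity (mGal); h_m: station elevation (m); rho: slab density (g/cm^3)."""
    free_air = 0.3086 * h_m             # free-air correction, mGal
    bouguer_slab = 0.04193 * rho * h_m  # infinite Bouguer slab correction, mGal
    return g_obs - normal_gravity(lat_deg) + free_air - bouguer_slab

# Hypothetical station: 979850 mGal observed at latitude 37 N, 200 m elevation
anomaly = simple_bouguer_anomaly(979850.0, 37.0, 200.0)
```

Terrain corrections, which matter in mountainous areas like the lake's surroundings, are omitted from this sketch.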

CACHE:Context-aware Clustering Hierarchy and Energy efficient for MANET (CACHE:상황인식 기반의 계층적 클러스터링 알고리즘에 관한 연구)

  • Mun, Chang-min;Lee, Kang-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.571-573
    • /
    • 2009
  • Mobile Ad-hoc Networks (MANETs) need efficient node management because the wireless network has energy constraints. The mobility of a MANET causes the topology to change frequently compared with a static network. To improve routing in MANETs, an energy-efficient routing protocol is required, and node mobility must also be considered. The previously proposed hybrid routing algorithm CACH prolongs the network lifetime and decreases latency; however, it has a problem when node density increases. In this paper, we propose a new algorithm, CACHE (Context-aware Clustering Hierarchy and Energy efficient). The proposed analysis not only helps define the optimum depth of the hierarchical architecture that CACH utilizes, but also alleviates the node-density problem.


Uncertainty analysis of grid-based distributed rainfall data on Mod-Clark model parameter estimation (격자기반 분포형 강우자료가 Mod-Clark 모형 매개변수 추정에 미치는 불확실성 분석)

  • Jeonghoon Lee;Jeongeun Won;Jiyu Seo;Sangdan Kim
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.347-347
    • /
    • 2023
  • In flood forecasting and warning, the influence of elevation on rainfall acting at hourly or sub-hourly time scales becomes important, and the orographic effect on the spatial distribution of rainfall is even more significant in mountainous areas, where gauge density is relatively sparse. It is generally known that examining the rainfall-elevation relationship at a one-hour time scale requires a gauge spacing of roughly 5 km, but such regions are very rare. As the quality of rainfall simulated by numerical weather prediction models has improved markedly in recent years, various related studies have been conducted in Korea. In this study, past storm events in the Namgang Dam basin were reproduced using WRF, and from the spatial rainfall fields thus produced we propose the WREPN (WRF Rainfall-Elevation Parameterized Nowcasting) model, which can account for the relationship between rainfall and elevation at hourly time scales. The WREPN model was used for flood discharge analysis, with IDW- and Kriging-based gridded rainfall, widely used in practice, as comparison groups. The Mod-Clark model was applied to analyze flood discharge from the grid-based distributed rainfall data, and a Bayesian technique was applied to analyze the uncertainty of the model parameters for each rainfall input. The parameter uncertainty analysis confirmed that the rainfall data from the WREPN model, which considers the rainfall-elevation relationship, has relatively low uncertainty.
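For context, the IDW baseline mentioned above grids rainfall by weighting gauges with inverse distance. A minimal sketch, with invented gauge coordinates and values (not the study's data):

```python
def idw(x, y, gauges, power=2.0):
    """Interpolate rainfall at (x, y) from gauges given as [(gx, gy, value), ...]."""
    num = den = 0.0
    for gx, gy, v in gauges:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return v  # exactly on a gauge: return its value
        w = d2 ** (-power / 2.0)  # weight = distance^(-power)
        num += w * v
        den += w
    return num / den

# Toy gauge network (km coordinates, mm rainfall)
gauges = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0), (0.0, 10.0, 30.0)]
grid = [[idw(i, j, gauges) for i in range(0, 11, 5)] for j in range(0, 11, 5)]
```

Unlike WREPN, this interpolation carries no rainfall-elevation information, which is the limitation the study targets.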


A Bottom-up Algorithm to Find the Densest Subgraphs Based on MapReduce (맵리듀스 기반 상향식 최대 밀도 부분그래프 탐색 알고리즘)

  • Lee, Woonghee;Kim, Younghoon
    • Journal of KIISE
    • /
    • v.44 no.1
    • /
    • pp.78-83
    • /
    • 2017
  • Finding densest subgraphs from social networks, such that the people in the subgraph belong to a particular community or share common interests, has been a recurring problem in numerous studies. However, these algorithms focused only on finding a single densest subgraph. We suggest a bottom-up heuristic algorithm that finds a densest subgraph by growing it from a given starting node, repeatedly adding the adjacent node with the maximum degree. Furthermore, since this approach matches well with parallel processing, we also implement a parallel algorithm on the MapReduce framework. In experiments using various graph data, we confirmed that the proposed algorithm finds the densest subgraphs in fewer steps than other related approaches. It also scales efficiently to many given starting nodes.
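The bottom-up heuristic described above can be sketched as follows: starting from a seed node, repeatedly absorb the frontier node of maximum degree and remember the subgraph with the best density (edges per node) seen so far. This is a simplified single-machine reading of the idea; the paper's MapReduce parallelization and exact details are not reproduced.

```python
def densest_from_seed(adj, seed):
    """adj: dict node -> set of neighbor nodes. Returns (best_density, best_nodes)."""
    sub = {seed}
    best_nodes, best_density = {seed}, 0.0
    frontier = set(adj[seed])
    while frontier:
        nxt = max(frontier, key=lambda n: len(adj[n]))  # max-degree neighbor
        sub.add(nxt)
        frontier |= adj[nxt] - sub
        frontier.discard(nxt)
        # density = number of internal edges / number of nodes
        edges = sum(1 for u in sub for v in adj[u] if v in sub and u < v)
        density = edges / len(sub)
        if density > best_density:
            best_density, best_nodes = density, set(sub)
    return best_density, best_nodes

# Toy graph: 4-clique {1, 2, 3, 4} with a pendant node 5
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
density, nodes = densest_from_seed(adj, 1)  # recovers the 4-clique
```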

An Algorithm of Score Function Generation using Convolution-FFT in Independent Component Analysis (독립성분분석에서 Convolution-FFT을 이용한 효율적인 점수함수의 생성 알고리즘)

  • Kim Woong-Myung;Lee Hyon-Soo
    • The KIPS Transactions:PartB
    • /
    • v.13B no.1 s.104
    • /
    • pp.27-34
    • /
    • 2006
  • In this study, we propose a new algorithm that generates the score function in ICA (Independent Component Analysis) using entropy theory. To generate the score function, the probability density function of the original signals must be estimated, and the density function must be differentiable. We therefore used kernel density estimation to derive the differential equation of the score function from the original signal. After rewriting the formula in convolution form to increase the speed of density estimation, we used the FFT algorithm, which computes convolutions quickly. The proposed score-function generation method reduces the error, that is, the density difference between the recovered and original signals. In computer simulations of the blind source separation problem, we estimated density functions more similar to those of the original signals than Extended Infomax and Fixed Point ICA did, and obtained an improved SNR (Signal to Noise Ratio) between the recovered and original signals.
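The core trick above, evaluating a kernel density estimate as a convolution via the FFT and then differentiating to get a score function, can be sketched on a grid as follows. The grid size, bandwidth, and sample data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def kde_fft(samples, grid_min, grid_max, n_grid=256, bandwidth=0.3):
    """Grid-based Gaussian KDE computed as a circular convolution via FFT."""
    edges = np.linspace(grid_min, grid_max, n_grid + 1)
    hist, _ = np.histogram(samples, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = centers[1] - centers[0]
    # Gaussian kernel on the same grid, normalized to sum to 1
    k = np.exp(-0.5 * ((np.arange(n_grid) - n_grid // 2) * dx / bandwidth) ** 2)
    k /= k.sum()
    # Shift the kernel peak to index 0 so circular convolution centers it
    density = np.fft.ifft(np.fft.fft(hist) * np.fft.fft(np.roll(k, -n_grid // 2))).real
    return centers, density

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 5000)
centers, density = kde_fft(samples, -6.0, 6.0)
score = np.gradient(np.log(density + 1e-12), centers)  # score = d/dx log p(x)
```

The grid must extend well past the data, since the FFT convolution wraps around at the boundaries.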

The Dependence of CT Scanning Parameters on CT Number to Physical Density Conversion for CT Image Based Radiation Treatment Planning System (CT 영상기반 방사선치료계획시스템을 위한 CT수 대 물리적 밀도 변환에 관한 CT 스캐닝 매개변수의 의존성)

  • Baek, Min Gyu;Kim, Jong Eon
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.6
    • /
    • pp.501-508
    • /
    • 2017
  • The dependence of CT scanning parameters on the CT number to physical density conversion was analyzed experimentally using CT images of CT and CBCT electron density phantoms acquired with the CT scanner used in radiotherapy. The CT numbers were independent of the tube current-exposure time product, slice thickness, image reconstruction filter, field of view, and phantom volume, but were dependent on the tube voltage and the phantom cross section. For the physical density range above 0, the maximum CT number difference observed between tube voltages of 90 and 120 kVp was 27%, and the maximum CT number difference observed between the CT body and head electron density phantoms was 15%.
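The conversion at issue is typically a piecewise-linear lookup that a treatment planning system builds from the phantom scan. A hedged sketch follows; the calibration pairs are invented example values, not the paper's measurements.

```python
from bisect import bisect_right

# Hypothetical (CT number [HU], physical density [g/cm^3]) calibration points
CALIBRATION = [(-1000, 0.001), (-500, 0.50), (0, 1.00), (60, 1.06), (1000, 1.60)]

def ct_to_density(hu):
    """Piecewise-linear interpolation of physical density from CT number."""
    pts = CALIBRATION
    if hu <= pts[0][0]:
        return pts[0][1]
    if hu >= pts[-1][0]:
        return pts[-1][1]
    i = bisect_right([h for h, _ in pts], hu)
    (h0, d0), (h1, d1) = pts[i - 1], pts[i]
    return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
```

The paper's finding implies that a separate table like this is needed per tube voltage and per phantom cross section (body vs. head).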

Estimation of Canopy Cover in Forest Using KOMPSAT-2 Satellite Images (KOMPSAT-2 위성영상을 이용한 산림의 수관 밀도 추정)

  • Chang, An-Jin;Kim, Yong-Min;Kim, Yong-Il;Lee, Byoung-Kil;Eo, Yan-Dam
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.20 no.1
    • /
    • pp.83-91
    • /
    • 2012
  • Crown density, defined as the proportion of the forest floor concealed by tree crowns, is important and useful information in various fields. Previous methods have estimated crown density by interpreting aerial photographs or through ground surveys. These approaches are time-consuming, labor-intensive, expensive, and inconsistent, as they involve a great deal of subjectivity and rely on the experience of the interpreter. In this study, the crown density of a forest in Korea was estimated using KOMPSAT-2 high-resolution satellite images. Using an image segmentation technique and stand information from the digital forest map, the forest area was divided into zones. The crown density for each segment was determined using the discriminant analysis method and the forest ratio method. The results showed that the accuracy of the discriminant analysis method was about 60%, while that of the forest ratio method was about 85%. The feasibility of extracting candidates for updating the digital forest map was verified by comparing the results with the map.
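The forest ratio idea reduces to computing, per segment, the fraction of pixels classified as crown. A minimal sketch with a toy classified raster and segment mask (assumptions for illustration, not the paper's data):

```python
def crown_density(classified, segment_id, segments):
    """classified/segments: equal-shaped 2-D lists; classified uses 1 = crown pixel."""
    crown = total = 0
    for row_c, row_s in zip(classified, segments):
        for c, s in zip(row_c, row_s):
            if s == segment_id:
                total += 1
                crown += c
    return crown / total if total else 0.0

segments   = [[1, 1, 2], [1, 2, 2]]  # two segments from image segmentation
classified = [[1, 0, 1], [1, 1, 0]]  # per-pixel crown/non-crown classification
# segment 1 covers pixels (0,0), (0,1), (1,0); two of three are crown
```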

Applying the L-index for Analyzing the Density of Point Features (점사상 밀도 분석을 위한 L-지표의 적용)

  • Lee, Byoung-Kil
    • Spatial Information Research
    • /
    • v.16 no.2
    • /
    • pp.237-247
    • /
    • 2008
  • Statistical analysis of coordinate information is regarded as one of the major GIS functions. Among these analyses, one of the most fundamental is density analysis of point features. For appropriate density analysis, determining the search radius (kernel radius) is of critical importance. In this study, using the L-index, known from previous research to be useful for choosing the kernel radius, radii for density analysis of various point features are estimated, and the behavior of the L-index is studied based on the results. The results show that the L-index is not suitable for determining the search radius for point features that are evenly distributed with small clusters, because its pattern depends on the size of the study area. For point features with a small number of highly clustered areas, however, the L-index is suitable, because its pattern is not affected by the size of the study area.
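Assuming the L-index here is Besag's L-function, L(r) = sqrt(K(r)/π) - r derived from Ripley's K, a naive sketch looks like this. Border correction is omitted for brevity, and the point coordinates are illustrative; under complete spatial randomness L(r) is near 0, positive values indicate clustering at radius r.

```python
import math

def l_index(points, r, area):
    """Naive L-function estimate for points [(x, y), ...] in a study area (no edge correction)."""
    n = len(points)
    lam = n / area  # point intensity
    pairs = 0
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(points[i], points[j]) <= r:
                pairs += 1
    k = pairs / (n * lam)  # Ripley's K estimate
    return math.sqrt(k / math.pi) - r

clustered = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
l_clustered = l_index(clustered, 0.5, 100.0)  # positive: clustered at this radius
```

Scanning r over a range and locating the peak of L(r) is the usual way such an index suggests a kernel radius.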


Improved Density-Independent Fuzzy Clustering Using Regularization (레귤러라이제이션 기반 개선된 밀도 무관 퍼지 클러스터링)

  • Han, Soowhan;Heo, Gyeongyong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.1
    • /
    • pp.1-7
    • /
    • 2020
  • Fuzzy clustering, represented by FCM (Fuzzy C-Means), is a simple and efficient clustering method. However, the objective function of FCM makes clusters affect the clustering results in proportion to their density, which can distort the results when cluster densities differ. One method to alleviate this density problem is EDI-FCM (Extended Density-Independent FCM), which adds terms to the objective function of FCM to compensate for the density difference. In this paper, we propose an enhanced EDI-FCM using regularization, Regularized EDI-FCM. Regularization is commonly used to smooth the solution space and make an algorithm insensitive to noise. In clustering, regularization can reduce the effect of a high-density cluster on the clustering results. Experimental results verify that the proposed method converges to the true centers more quickly and accurately than FCM and EDI-FCM.
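For reference, here is a minimal sketch of the plain FCM iteration (fuzzifier m = 2) that the paper modifies; the density-compensating EDI terms and the regularization term of Regularized EDI-FCM are not reproduced, and the data are toy assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)  # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)   # weighted means
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))                            # membership update
        U /= U.sum(axis=0)
    return centers, U

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, U = fcm(X, 2)
```

In plain FCM, a cluster with many more points pulls both centers toward it; EDI-FCM's extra terms counteract exactly that effect.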

Determination of Density of Saturated Sand Considering Particle-fluid Interaction During Earthquake (입자-유체 상호거동을 고려한 지진시 포화 모래지반의 밀도 결정)

  • Kim, Hyun-Uk;Lee, Sei-Hyun;Youn, Jun-Ung
    • Journal of the Korean Geotechnical Society
    • /
    • v.38 no.10
    • /
    • pp.41-48
    • /
    • 2022
  • The mass density of the medium (ρ) to be used in calculating the maximum shear modulus (Gmax) of saturated ground from the shear wave velocity is unclear. Therefore, to determine the mass density, a verification formula and five scenarios were established, laboratory tests were conducted, and the results were compared. The mass density of the medium was assumed to correspond to saturated (ρsat), wet (ρt), dry (ρdry), and submerged (ρsub) conditions, and the Vs ratios of the saturated to the dry condition were obtained for each case. Under the saturated density assumption (ρsat), the Vs ratio was consistent with the resonant column test (RCT) results, while the bender element test results were consistent with the wet density assumption (ρt). Considering the frequency range of earthquakes, it is concluded that applying the saturated density (ρsat) is reasonable, as in the RCT results.
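The relation at stake is Gmax = ρ·Vs², so the density assumption scales the modulus directly. A small illustration with assumed values (not the paper's test data):

```python
def g_max(rho_kg_m3, vs_m_s):
    """Small-strain shear modulus G_max = rho * Vs^2, in Pa."""
    return rho_kg_m3 * vs_m_s ** 2

rho_sat, rho_dry = 2000.0, 1600.0  # kg/m^3, assumed saturated vs. dry densities
vs = 200.0                         # m/s, assumed shear wave velocity

# Choosing rho_sat over rho_dry changes G_max by exactly rho_sat / rho_dry
ratio = g_max(rho_sat, vs) / g_max(rho_dry, vs)
```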