Single-Channel Seismic Data Processing via Singular Spectrum Analysis
Geophysics and Geophysical Exploration, v.27 no.2, pp.91-107, 2024
Single-channel seismic exploration has proven effective in delineating subsurface geological structures using small-scale survey systems. The seismic data acquired through zero- or near-offset methods directly capture subsurface features along the vertical axis, facilitating the construction of corresponding seismic sections. However, substantial noise in single-channel seismic data hampers precise interpretation because of the low signal-to-noise ratio. This study introduces a novel approach that integrates noise reduction and signal enhancement via matrix rank optimization to address this issue. Unlike conventional rank-reduction methods, which retain selected singular values to mitigate random noise, our method optimizes the entire singular value spectrum, thus effectively tackling both the random and erratic noise commonly found in environments with low signal-to-noise ratios. Additionally, to enhance the horizontal continuity of seismic events and mitigate signal loss during noise reduction, we introduce an adaptive weighting factor computed from the eigenimage of the seismic section. To assess the robustness of the proposed method, we conducted numerical experiments using single-channel Sparker seismic data from the Chukchi Plateau in the Arctic Ocean. The results demonstrated that the seismic sections had significantly improved signal-to-noise ratios and minimal signal loss. These advancements hold promise for enhancing single-channel and high-resolution seismic surveys and for aiding marine development and the identification of submarine geological hazards in domestic coastal areas.
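The rank-optimization idea above can be illustrated with a minimal NumPy sketch. The paper's method optimizes the full singular-value spectrum; the weight vector below is a simple rank-k truncation, which is just a special case of such a weighting, and all names and the toy data are illustrative:

```python
import numpy as np

def svd_denoise(section, weights):
    """Denoise a 2-D seismic section by reweighting its singular values.

    Classical rank reduction keeps the first k singular values; here every
    singular value is scaled by a weight in [0, 1], the general form of
    which the spectrum optimization described above is an instance.
    """
    U, s, Vt = np.linalg.svd(section, full_matrices=False)
    return (U * (s * weights)) @ Vt

# Toy example: a rank-1 "signal" plus random noise.
rng = np.random.default_rng(0)
signal = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.ones(32))
noisy = signal + 0.3 * rng.standard_normal((64, 32))

# Rank-1 truncation expressed as a weight vector over the spectrum.
w = np.zeros(min(noisy.shape))
w[:1] = 1.0
denoised = svd_denoise(noisy, w)
```

Replacing the hard 0/1 weights with data-adaptive values in [0, 1] is the step that lets a scheme like this attenuate erratic as well as random noise.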
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in the formation of anastomotic neointimal fibrous hyperplasia (ANFH) and in graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
A transform-space index indexes objects represented as points in the transform space. An advantage of a transform-space index is that optimization of join algorithms using these indexes becomes relatively simple. However, the disadvantage is that these algorithms cannot be applied to original-space indexes such as the R-tree. As a way of overcoming this disadvantage, the authors earlier proposed the transform-space view join algorithm, which joins two original-space indexes in the transform space through the notion of the transform-space view. A transform-space view is a virtual transform-space index that allows us to perform the join in the transform space using original-space indexes. In a transform-space view join algorithm, the order of accessing disk pages (for which various space-filling curves could be used) makes a significant impact on the performance of joins. In this paper, we propose a new space-filling curve called the adaptive row major order (ARM order). The ARM order adaptively controls the order of accessing pages and significantly reduces the one-pass buffer size (the minimum buffer size required for guaranteeing one disk access per page) and the number of disk accesses for a given buffer size. Through analysis and experiments, we verify the excellence of the ARM order when used with the transform-space view join. The transform-space view join with the ARM order always outperforms existing ones in terms of both measures used: the one-pass buffer size and the number of disk accesses for a given buffer size. Compared to other conventional space-filling curves used with the transform-space view join, it reduces the one-pass buffer size by up to 21.3 times and the number of disk accesses by up to
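The ARM order itself is the paper's contribution and is not reproduced here, but the underlying idea of ordering page accesses along a space-filling curve can be sketched with the standard Z-order (Morton) curve, one of the conventional curves such an algorithm would be compared against. The grid size and page layout below are purely illustrative:

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of (x, y) to get the cell's Z-order (Morton) index."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bit i -> even position
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bit i -> odd position
    return key

# Visit the pages of a 4x4 grid in Z-order instead of plain row-major order;
# nearby cells in space stay nearby in the access sequence, which is what
# keeps the working buffer small during a join.
pages = [(x, y) for y in range(4) for x in range(4)]
z_order = sorted(pages, key=lambda p: morton_key(p[0], p[1]))
```

An adaptive order such as the ARM order goes further by choosing the traversal dynamically rather than following one fixed curve.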
In February 2008, high storm waves due to a developed atmospheric low-pressure system propagating from the west off Hokkaido, Japan, to the south and southwest throughout the East Sea (ES) caused extensive damage along the central coast of Japan and along the east coast of Korea. This study consists of two parts. In the first part, we estimate extreme storm wave characteristics in Toyama Bay, where heavy coastal damage occurred, using a non-hydrostatic meteorological model and a spectral wave model, considering extreme conditions for the two factors governing wind-wave growth, namely wind intensity and duration. The estimated extreme significant wave height and corresponding wave period were 6.78 m and 18.28 sec, respectively, at Fushiki, Toyama. In the second part, we perform numerical experiments on wave-structure interaction in the Fushiki Port, Toyama Bay, where the long North-Breakwater was heavily damaged by the storm waves in February 2008. The experiments are conducted using a non-linear shallow-water equation model with adaptive mesh refinement (AMR) and a wet-dry scheme. The estimated extreme storm waves of 6.78 m and 18.28 sec are used for the incident wave profile. The results show that the Fushiki Port would be overtopped and flooded by extreme storm waves if the North-Breakwater does not function properly after being damaged. The storm waves would also overtop the seawalls and sidewalls of the Manyou Pier behind the North-Breakwater. The results also show that the meshes refined by the AMR method with the wet-dry scheme capture the coastline and coastal structures well while keeping the computational load low.
Objectives: Shift work is a stressful situation, and it is important to know the factors associated with the ability to adapt to a shift-work schedule. The aim of the present study was to investigate the association between sleep, as well as personality variables, and the resilience of shift-work nurses. Methods: Self-report questionnaires were administered to 95 nurses who worked in one national university hospital. The Connor-Davidson resilience scale, hospital anxiety and depression scale, morningness-eveningness scale, Pittsburgh sleep quality index, other sleep-related questionnaires, and the Korean defense style questionnaire were used. Results: Age, shift work duration, off-day oversleep, depression, anxiety, adaptive defense style, and self-suppressive defense style were significantly associated with resilience (p < 0.05). Multiple regression analysis showed that age (
In this paper, we propose an efficient method for improving the visual quality of AR-FGS (Adaptive Reference FGS), which is adopted as a key scheme for SVC (Scalable Video Coding), the H.264 scalable extension. The standard FGS (Fine Granularity Scalability) adopts AR-FGS, which introduces temporal prediction into the FGS layer by using a high-quality reference signal constructed as the weighted average of the base-layer reconstructed image and the enhancement-layer reference, to improve the coding efficiency of the FGS layer. However, when the enhancement stream is truncated at a certain bitstream position during transmission, the rest of the FGS-layer data is not available at the FGS decoder. Thus the most noticeable problem of using the enhancement layer in prediction is the degraded visual quality caused by drifting, due to the mismatch between the reference frame used by the FGS encoder and that used by the decoder. To solve this problem, we exploit the principle of cyclical block coding, which encodes the quantized transform coefficients in a cyclical manner in the FGS layer. Encoding block coefficients cyclically places 'higher-value' bits earlier in the bitstream, so the quantized transform coefficients included in an early coding cycle have a higher probability of being correctly received and decoded than those included in a later cycle. Therefore, we can minimize the visual quality degradation caused by bitstream truncation by adjusting a weighting factor that controls the contribution of the bitstream produced in each coding cycle when constructing the enhancement-layer reference frame. Simulations show that the improved AR-FGS scheme outperforms the standard AR-FGS by up to about 1 dB in reconstructed visual quality.
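The per-cycle weighting described above can be sketched as follows. This is a schematic of the reference-frame construction only, not the SVC codec itself; the function and variable names are illustrative, and the weights are made up:

```python
import numpy as np

def build_reference(base, enh_cycles, weights):
    """Build an enhancement-layer reference frame from per-cycle refinements.

    base       : base-layer reconstructed frame
    enh_cycles : refinement contribution decoded from each cyclical-coding cycle
    weights    : per-cycle weighting factors in [0, 1]; early cycles, which are
                 more likely to survive bitstream truncation, get larger weights
    """
    ref = base.astype(float)
    for contrib, w in zip(enh_cycles, weights):
        ref += w * contrib
    return ref

base = np.full((4, 4), 100.0)
cycles = [np.full((4, 4), 8.0),   # early cycle: high-value bits
          np.full((4, 4), 2.0)]   # late cycle: likely to be truncated
ref = build_reference(base, cycles, weights=[0.9, 0.3])
```

Down-weighting the late cycle bounds the encoder/decoder mismatch when that part of the stream is lost, which is exactly the drift-reduction mechanism the abstract describes.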
In recent years the amount of digital video used has risen dramatically to keep pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information, both superimposed text and embedded scene text, appearing in a digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene texts in a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, the color image is converted into a gray-level image and contrast stretching is applied to enhance the contrast of the input image; a modified local adaptive thresholding is then applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two result images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The proposed method also performs well on various kinds of images when the size of the structuring element is adjusted.
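The first step (contrast stretching followed by local adaptive thresholding) can be sketched in plain NumPy. This is a generic illustration of those two standard operations, not the paper's modified thresholding; the block size, offset, and synthetic image are assumptions:

```python
import numpy as np

def contrast_stretch(img):
    """Linearly stretch gray levels to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / (hi - lo)

def local_adaptive_threshold(img, block=8, offset=10):
    """Binarize: a pixel is foreground if it exceeds its block's mean minus offset."""
    out = np.zeros(img.shape, dtype=np.uint8)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (tile > tile.mean() - offset) * 255
    return out

gray = np.tile(np.linspace(50, 200, 32), (32, 1))   # synthetic gradient image
binary = local_adaptive_threshold(contrast_stretch(gray))
```

Thresholding per block rather than globally is what lets text survive uneven lighting across a news frame, which motivates the local variant used above.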
With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, clustering model or dependency network model. This filtering technique not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates the Markov model of transition probabilities and the concept of fuzzy clustering with CBCF to propose predictive clustering-based CF (PCCF), which solves the issues of reduced coverage and unstable performance. The method improves performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users.
Furthermore, the issue of reduced coverage is also improved by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of the subsequent changes in system performance. The test results revealed that the suggested method produced insignificant improvement in performance in comparison with the existing techniques. In addition, it failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nevertheless, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test. In the following test, there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability compared to the existing techniques.
Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques, and will consider the introduction of a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
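The Markov component of the method above, estimating transition probabilities between preference clusters from each user's chronological trail, can be sketched as follows. The cluster labels and trails are hypothetical, and the fuzzy-clustering side of PCCF is not reproduced:

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Estimate Markov transition probabilities from cluster-label sequences.

    Each sequence is one user's chronological trail of preference clusters;
    counts are row-normalized into P[i, j] = Pr(next = j | current = i).
    """
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0          # avoid division by zero for unseen states
    return counts / rows

# Two users drifting between three hypothetical preference clusters.
trails = [[0, 0, 1, 2], [0, 1, 1, 2]]
P = transition_matrix(trails, 3)
```

A prediction step can then weight each cluster's recommendation by the probability of the user transitioning into it, which is how transition probabilities expand coverage beyond the user's current cluster.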
The aim of this research was to develop a climate change vulnerability index at the district level (Si, Gun, Gu) with respect to the health care sector in Korea. The climate change vulnerability index was estimated based on the four major causes of climate-related illnesses: vectors, floods, heat waves, and air pollution/allergies. The vulnerability assessment framework consists of six layers, all of which are based on the IPCC vulnerability concepts (exposure, sensitivity, and adaptive capacity) and the pathway of direct and indirect impacts of climate change modulators on health. We collected proxy variables based on the conceptual framework of climate change vulnerability. Data were standardized using the min-max normalization method. We applied analytic hierarchy process (AHP) weights and aggregated the variables using the non-compensatory multi-criteria approach. To verify the index, sensitivity analysis was conducted using another aggregation method (the geometric transformation method, which was applied to the index of multiple deprivation in the UK) and weights calculated by the budget allocation method. The results showed that it would be possible to identify vulnerable areas by applying the developed climate change vulnerability assessment index. The climate change vulnerability index could then be used as a valuable tool in setting climate change adaptation policies in the health care sector.
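The normalization and weighting steps can be sketched as below. For brevity this uses a simple additive aggregation rather than the non-compensatory multi-criteria approach the study actually applied, and the district values and weights are invented for illustration:

```python
import numpy as np

def min_max(x):
    """Rescale a proxy variable to [0, 1] across districts (min-max normalization)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical proxy variables for three districts, one per IPCC component,
# and illustrative AHP-style weights summing to 1.
exposure    = min_max([10.0, 30.0, 20.0])
sensitivity = min_max([0.2, 0.5, 0.9])
capacity    = min_max([5.0, 1.0, 3.0])
weights = np.array([0.4, 0.35, 0.25])

# Higher adaptive capacity lowers vulnerability, so it enters with a minus sign.
vulnerability = (weights[0] * exposure
                 + weights[1] * sensitivity
                 - weights[2] * capacity)
```

A non-compensatory aggregation differs from this sketch in that a very low score on one component cannot be fully offset by high scores on the others.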