• Title/Summary/Keyword: weighted transform

A Study on Evaluation for the Han River Water Quality Index (한강의 수질지수 산정에 관한 연구)

  • 서정현
    • Water for future / v.14 no.3 / pp.55-66 / 1981
  • The theory and practice of water quality scoring and indexing are introduced. Monthly water analysis data are available for six stations along the downstream Han River within the areal boundary of the Special City of Seoul. The data cover the period from 1975 to 1979 inclusive and contain analytical findings on 37 water constituents, including DO, BOD, temperature, and total solids. Six parameters, which in the judgement of the writer best reflect the water quality of the Han River, are selected from the 37 items: dissolved oxygen saturation, pH, fecal coliform, total solids, BOD, and nitrate+ammonia. For each of the six parameters, a subscore function is developed and graphically presented to facilitate the transform of a measurement of the parameter to a subscore on a common scale (e.g., 0-100). The score of a sample is calculated as a function of the six subscores using four different approaches: (1) the unweighted arithmetic water quality score, (2) the weighted arithmetic water quality score, (3) the unweighted multiplicative score, and (4) the reduced (total) score (the standard forms of the first three are sketched below). Independently of these calculated scores, an experts' score, calculated by averaging the ratings of water quality experts, is obtained and compared with each of the four calculated scores by means of the least-squares method. The experts' score compares most favorably with the "reduced" score, with a correlation coefficient of 0.956; this method of water quality scoring is therefore adopted to calculate the Han River water quality scores and indices. Water quality indices for the Guiri, Ttukdo, Pokwangdong, Noryangjin, Yongdungpo, and Kayang stations are computed for 1975-1979. The overall water quality index of the Han River between the Guiri and Kayang stations is 47.3 in 1976, 48.0 in 1977, 48.5 in 1978, and 54.7 in 1979, indicating a general trend toward water quality improvement in this part of the river, with the index increasing by an average of 1.85 points per year during this period. Finally, the optimum sampling frequencies, distributed among the six stations, are calculated using an equation that takes into account the coefficients of variation of the water quality scores and indices.

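The first three of these aggregation approaches have standard water-quality-index forms. Below is a minimal Python sketch with illustrative subscores and weights; the paper's actual weights, and the formula of its "reduced (total)" score, are not given in the abstract, so the latter is omitted.

```python
import numpy as np

def unweighted_arithmetic(q):
    """Unweighted arithmetic score: plain mean of the subscores."""
    return np.mean(q)

def weighted_arithmetic(q, w):
    """Weighted arithmetic score: sum of w_i * q_i, with weights summing to 1."""
    return np.dot(w, q)

def unweighted_multiplicative(q):
    """Unweighted multiplicative score: geometric mean of the subscores."""
    return np.prod(q) ** (1.0 / len(q))

# Illustrative subscores (0-100 scale) for the six parameters:
# DO saturation, pH, fecal coliform, total solids, BOD, nitrate+ammonia.
q = np.array([72.0, 85.0, 40.0, 66.0, 55.0, 60.0])
w = np.array([0.25, 0.10, 0.20, 0.10, 0.20, 0.15])   # assumed weights

print(unweighted_arithmetic(q))      # 63.0
print(weighted_arithmetic(q, w))     # 61.1
print(unweighted_multiplicative(q))
```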

Detection of formation boundaries and permeable fractures based on frequency-domain Stoneley wave logs

  • Saito Hiroyuki;Hayashi Kazuo;Iikura Yoshikazu
    • Geophysics and Geophysical Exploration / v.7 no.1 / pp.45-50 / 2004
  • This paper describes a method of detecting formation boundaries and permeable fractures from frequency-domain Stoneley wave logs. Field data sets were collected between the depths of 330 and 360 m in well EE-4 in the Higashi-Hachimantai geothermal field, using a monopole acoustic logging tool with a source central frequency of 15 kHz. Stoneley wave amplitude spectra were calculated by performing a fast Fourier transform on the waveforms, and the spectra were then collected into a frequency-depth distribution of Stoneley wave amplitudes. The frequency-domain Stoneley wave log shows four main characteristic peaks, at frequencies of 6.5, 8.8, 12, and 13.3 kHz. The magnitudes of the Stoneley wave at these four frequencies are affected by formation properties: the wave at the higher frequencies (12 and 13.3 kHz) has higher amplitudes in hard formations than in soft formations, while the wave at the lower frequencies (6.5 and 8.8 kHz) has higher amplitudes in soft formations than in hard formations. The correlation of the frequency-domain Stoneley wave log with the logs of lithology, degree of welding, and P-wave velocity is excellent, with all of them showing similar discontinuities at the depths of formation boundaries. These facts show that the frequency-domain Stoneley wave log provides useful clues for detecting formation boundaries. The frequency-domain Stoneley wave logs are also applicable to the detection of a single permeable fracture. The procedure uses the Stoneley wave spectral amplitude logs at the four frequencies together with weighting functions: the optimally weighted sum of the four Stoneley wave spectral amplitudes becomes almost constant at all depths, except at the depth of a permeable fracture. The assumptions that underlie this procedure are that the energy of the Stoneley wave is conserved in continuous media, but that attenuation of the Stoneley wave may occur at a permeable fracture; this attenuation may take place at any one of the four characteristic Stoneley wave frequencies. We think our multispectral approach is the only reliable method for the detection of permeable fractures.
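A minimal numerical sketch of the weighting step, assuming the weights are chosen by least squares so that the weighted sum of the four amplitude logs is as close to constant as possible; the amplitude data here are synthetic placeholders, not the EE-4 logs.

```python
import numpy as np

def optimal_weights(amps):
    """Weights making the weighted sum of the four spectral amplitude logs
    as close to a constant (here 1) as possible, in the least-squares sense."""
    target = np.ones(amps.shape[0])
    w, *_ = np.linalg.lstsq(amps, target, rcond=None)
    return w

def flag_fracture_depths(depths, amps, n_sigma=3.0):
    """Depths where the weighted sum deviates strongly (attenuation) are
    candidates for a permeable fracture."""
    s = amps @ optimal_weights(amps)
    resid = s - s.mean()
    return depths[np.abs(resid) > n_sigma * resid.std()]

# Illustrative data: amplitudes at 6.5, 8.8, 12, and 13.3 kHz over 330-360 m.
depths = np.linspace(330.0, 360.0, 301)
rng = np.random.default_rng(0)
amps = 1.0 + 0.05 * rng.standard_normal((301, 4))
amps[150] *= 0.5          # simulated attenuation at a permeable fracture
print(flag_fracture_depths(depths, amps))   # ~345 m
```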

Study on the Service Area Determination of the Public Facilities Applying Voronoi Diagrams - Case Study of the Fire Services in Gangnam-Gu, Seoul - (보로노이 다이어그램을 적용한 공공서비스의 관할구역 설정에 대한 연구 - 서울 강남 지역의 소방서를 사례로 하여 -)

  • Kim, Jae-Won;Kang, Jee-Hoon;Lee, Eui-Young;Kang, Yong-Jin
    • Spatial Information Research / v.15 no.3 / pp.203-218 / 2007
  • The purpose of this article is to establish a scientific and reasonable norm for determining the location and service areas of public facilities, replacing the existing administration-oriented districting, which lacks practical availability, and thereby to propose a more practical and reasonable spatial standard for facility location. The article adopts the Voronoi diagram as this new scientific and reasonable criterion (a sketch of the Voronoi construction follows this entry). It selects and implements a model that can propose new service areas, transforms and applies the model to improve its realism, and assesses which variant offers more realism and compatibility by comparing the models. The results of this procedure can be used to objectify service area determination and to form a standard spatial unit.

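A minimal sketch of the Voronoi construction with SciPy, using hypothetical station coordinates; the paper's actual Gangnam-gu fire-station locations are not given in the abstract.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

# Hypothetical fire-station coordinates (projected x/y, e.g., in km).
stations = np.array([[2.0, 3.0], [5.0, 1.0], [7.0, 4.0], [4.0, 6.0]])

# The Voronoi diagram partitions the plane so that each cell contains the
# points closer to its station than to any other -- the service-area criterion.
vor = Voronoi(stations)
print(vor.vertices)        # vertices of the cell boundaries
print(vor.ridge_vertices)  # edges between neighbouring cells

# Assigning demand points (e.g., addresses) to stations is a nearest-neighbour
# query, equivalent to locating the Voronoi cell each point falls in.
demand = np.array([[3.0, 2.5], [6.5, 3.0]])
_, nearest = cKDTree(stations).query(demand)
print(nearest)             # index of the responsible station per demand point
```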

GA-based Normalization Approach in Back-propagation Neural Network for Bankruptcy Prediction Modeling (유전자알고리즘을 기반으로 하는 정규화 기법에 관한 연구 : 역전파 알고리즘을 이용한 부도예측 모형을 중심으로)

  • Tai, Qiu-Yue;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.1-14 / 2010
  • The back-propagation neural network (BPN) has long been applied successfully to bankruptcy prediction problems. Despite its wide application, some major issues must be considered before its use, such as the network topology, the learning parameters, and the normalization methods for the input and output vectors. Previous studies on bankruptcy prediction with BPN show that many researchers are interested in optimizing the network topology and learning parameters to improve prediction performance; in many cases, however, the benefits of data normalization are overlooked. In this study, a genetic algorithm (GA)-based normalization transform, defined as a linearly weighted combination of several different normalization transforms, is proposed. The GA is used to extract the optimal weights of the combination. In experiments, the proposed method was evaluated and compared with other methods to demonstrate its advantage.
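A minimal sketch of the idea, assuming min-max, z-score, and sigmoidal normalization as the candidate transforms and a toy GA with truncation selection and Gaussian mutation; the paper's actual candidate set, GA operators, and fitness (the BPN's prediction performance) are not specified in the abstract.

```python
import numpy as np

def minmax(x):        # rescale each column to [0, 1]
    return (x - x.min(0)) / (x.max(0) - x.min(0) + 1e-12)

def zscore(x):        # zero mean, unit variance per column
    return (x - x.mean(0)) / (x.std(0) + 1e-12)

def sigmoid(x):       # squash z-scores through a logistic curve
    return 1.0 / (1.0 + np.exp(-zscore(x)))

TRANSFORMS = [minmax, zscore, sigmoid]

def combined(x, w):
    """Linearly weighted combination of the candidate normalizations."""
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    return sum(wi * t(x) for wi, t in zip(w, TRANSFORMS))

def ga_weights(x, fitness, pop=20, gens=50, seed=0):
    """Toy GA: truncation selection plus Gaussian mutation. In the paper the
    fitness would be the BPN's prediction performance on validation data."""
    rng = np.random.default_rng(seed)
    P = rng.random((pop, len(TRANSFORMS)))
    for _ in range(gens):
        scores = np.array([fitness(combined(x, w)) for w in P])
        elite = P[np.argsort(scores)[-(pop // 2):]]      # keep the best half
        P = np.vstack([elite, elite + 0.1 * rng.standard_normal(elite.shape)])
    return P[np.argmax([fitness(combined(x, w)) for w in P])]

# Toy demo: features on very different scales; toy fitness favors unit spread.
X = np.random.default_rng(1).random((100, 5)) * [1, 10, 100, 1e3, 1e4]
print(ga_weights(X, fitness=lambda z: -abs(z.std() - 1.0)))
```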

Optimal Design of a Continuous Time Deadbeat Controller (연속시간 유한정정제어기의 최적설계)

  • Kim Seung Youal;Lee Keum Won
    • Journal of the Institute of Convergence Signal Processing / v.1 no.2 / pp.169-176 / 2000
  • The deadbeat property is well established in discrete-time control system design. In continuous-time systems, however, exact deadbeat response is impossible with designs that simply carry over digital design theory, because of ripples between sampling points. Several researchers have therefore suggested delay elements; a delay element is derived from the concept of the finite Laplace transform. From specifications such as internal model stability, physical realizability, and finite-time settling, the unknown coefficients and poles of an error transfer function containing delay elements can be calculated so as to satisfy these specifications. For application to real systems, a robustness property can be added. In this paper, the error transfer function is specified with one delay element, and a robustness condition is considered additionally. As the robustness criterion, the $H_\infty$ norm of a weighted sensitivity function is used (the criterion is sketched below). The poles of the error transfer function are calculated optimally for the minimum value of the criterion. In this sense, an optimal design of the continuous-time deadbeat controller is obtained.

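A minimal statement of the robustness criterion in standard $H_\infty$ notation; the weighting function $W(s)$ and the free poles $p_1,\dots,p_n$ of the error (sensitivity) transfer function $S(s)$ are generic placeholders, since the abstract does not give the paper's exact parameterization.

```latex
% Weighted-sensitivity criterion for the robust deadbeat design (sketch).
% S(s) is the error transfer function containing one delay element e^{-sT};
% p_1, ..., p_n are its free poles; W(s) is a designer-chosen weight.
\[
  \min_{p_1,\dots,p_n} \; \bigl\| W(s)\, S(s) \bigr\|_{\infty}
  \;=\; \min_{p_1,\dots,p_n} \; \sup_{\omega} \bigl| W(j\omega)\, S(j\omega) \bigr|
\]
% subject to finite-time settling (deadbeat), internal model stability,
% and physical realizability of the resulting controller.
```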

A study on skip-connection with time-frequency self-attention for improving speech enhancement based on complex-valued spectrum (복소 스펙트럼 기반 음성 향상의 성능 향상을 위한 time-frequency self-attention 기반 skip-connection 기법 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.2 / pp.94-101 / 2023
  • A deep neural network composed of encoders and decoders, such as U-Net, used for speech enhancement, connects the encoder to the decoder through skip-connections. Skip-connections help reconstruct the enhanced spectrum and complement the lost information, but the encoder and decoder features joined by a skip-connection are not necessarily compatible with each other. In this paper, for complex-valued spectrum based speech enhancement, a Self-Attention (SA) method is applied to the skip-connection to transform the encoder features so that they are compatible with the decoder features. SA is a technique in which, when generating an output sequence in sequence-to-sequence tasks, a weighted average of the input is used to attend to subsets of the input; applied to speech enhancement, it has been shown to eliminate noise effectively. Three models that use encoder and decoder features to apply SA to the skip-connection are studied. In experiments on the TIMIT database, the proposed methods show improvements in all evaluation metrics compared to the Deep Complex U-Net (DCUNET) with skip-connection only.
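A minimal PyTorch-style sketch of one plausible form of time-frequency self-attention on a skip-connection (attention along the time axis, then along the frequency axis of the encoder feature); the paper's three actual variants are not specified in the abstract.

```python
import torch
import torch.nn as nn

class TFSelfAttentionSkip(nn.Module):
    """Sketch: transform the encoder feature with time- and frequency-axis
    self-attention before it is concatenated with the decoder feature."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.freq_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, enc):                     # enc: (batch, C, F, T)
        b, c, f, t = enc.shape
        x = enc.permute(0, 2, 3, 1)             # (b, F, T, C)
        xt = x.reshape(b * f, t, c)             # attend along time, per frequency
        xt, _ = self.time_attn(xt, xt, xt)
        x = xt.reshape(b, f, t, c)
        xf = x.permute(0, 2, 1, 3).reshape(b * t, f, c)   # attend along frequency
        xf, _ = self.freq_attn(xf, xf, xf)
        x = xf.reshape(b, t, f, c).permute(0, 3, 2, 1)    # back to (b, C, F, T)
        return x + enc                          # residual connection

# Usage inside a U-Net style decoder stage (hypothetical tensor names):
# skip = TFSelfAttentionSkip(channels=64)(encoder_feature)
# decoder_input = torch.cat([skip, decoder_feature], dim=1)
```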

Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks : A Preliminary Study

  • Lee, Jung Hwan;Han, In Ho;Kim, Dong Hwan;Yu, Seunghan;Lee, In Sook;Song, You Seon;Joo, Seongsu;Jin, Cheng-Bin;Kim, Hakil
    • Journal of Korean Neurosurgical Society / v.63 no.3 / pp.386-396 / 2020
  • Objective : To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarities between synthesized and real MR images. Methods : GANs were trained to transform spine CT image slices into spine magnetic resonance T2-weighted (MRT2) axial image slices by combining adversarial loss and voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results : The mean overall similarity of the synthetic MRT2 images evaluated by the radiologists was 80.2%. In the blind classification of the real MRT2 images, the failure rate ranged from 0% to 40%. The MAE value of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR of each image ranged from 61.96 to 68.16 dB (mean, 64.92 dB). Conclusion : This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were relatively well implemented. Synthesis of medical images using GANs is a new paradigm of artificial intelligence application in medical imaging. We expect that synthesis of MR images from spine CT images using GANs will improve the diagnostic usefulness of CT. To better inform the clinical applications of this technique, further studies are needed involving a large dataset, a variety of pathologies, and other MR sequences of the lumbar spine.
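The training objective described in the Methods combines an adversarial term with a voxel-wise term. A minimal PyTorch-style sketch of such a generator loss follows; the L1 choice for the voxel-wise term and the weighting factor follow common image-translation practice and are assumptions here, not the paper's exact loss.

```python
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()   # adversarial loss on discriminator logits
voxel_criterion = nn.L1Loss()            # voxel-wise loss, synthetic vs. real MR

def generator_loss(disc_logits_fake, fake_mr, real_mr, lam=100.0):
    """Adversarial term (fool the discriminator) + weighted voxel-wise term.
    `lam` balances the two terms (assumed value, as in common pix2pix practice)."""
    adv = adv_criterion(disc_logits_fake, torch.ones_like(disc_logits_fake))
    vox = voxel_criterion(fake_mr, real_mr)
    return adv + lam * vox
```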

Improvement of Rating Curve Fitting Considering Variance Function with Pseudo-likelihood Estimation (의사우도추정법에 의한 분산함수를 고려한 수위-유량 관계 곡선 산정법 개선)

  • Lee, Woo-Seok;Kim, Sang-Ug;Chung, Eun-Sung;Lee, Kil-Seong
    • Journal of Korea Water Resources Association / v.41 no.8 / pp.807-823 / 2008
  • This paper presents a technique for estimating discharge rating curve parameters. In typical practice, the original non-linear rating curve is transformed into a simple linear regression model by log-transforming the measurements, without examining the effect of the log transformation. A pseudo-likelihood estimation (P-LE) model is developed in this study to deal with heteroscedasticity of the residuals in the original non-linear model. The parameters of the rating curves and the variance functions of the errors are estimated simultaneously by the P-LE method. Simulated annealing, a global optimization technique, is adopted to minimize the log-likelihood of the weighted residuals. The P-LE model was first applied to a hypothetical site where stage-discharge data were generated by incorporating various errors. Results of the P-LE model show reduced error values and narrower confidence intervals than those of the common log-transformed linear least-squares (LT-LR) model. The limit of water levels for segmentation of the discharge rating curve is also estimated in the P-LE process, using the Heaviside function. Finally, the model performance of the conventional log-transformed linear regression and of the developed P-LE model is computed and compared. After the statistical simulation, the developed method is applied to real data sets from five gauge stations in the Geum River basin. This strategy can be applied to real sites to determine weights that take the error distributions of the observed discharge data into account.
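A minimal sketch of pseudo-likelihood estimation for a rating curve, assuming the common power-law form Q = a(h - h0)^b and a power-of-the-mean variance function, with SciPy's dual annealing standing in for the paper's simulated annealing; the functional forms and data here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing

def neg_pseudo_loglik(params, h, q):
    """Rating curve Q = a*(h - h0)^b, variance function Var = s2 * mu^(2*theta).
    Both power-law forms are assumptions; the abstract does not state them."""
    a, b, h0, log_s2, theta = params
    mu = a * (h - h0) ** b
    var = np.exp(log_s2) * mu ** (2.0 * theta)
    # Gaussian pseudo-likelihood of the weighted residuals
    return 0.5 * np.sum(np.log(var) + (q - mu) ** 2 / var)

# Illustrative stage-discharge data with heteroscedastic (level-dependent) errors.
rng = np.random.default_rng(1)
h = np.linspace(1.0, 5.0, 60)
mu_true = 3.0 * (h - 0.5) ** 1.8
q = mu_true * (1.0 + 0.05 * rng.standard_normal(60))

# dual_annealing is SciPy's simulated-annealing variant; the bounds keep h0
# below the lowest observed stage so (h - h0) stays positive.
bounds = [(0.1, 10.0), (0.5, 3.0), (-1.0, 0.9), (-10.0, 5.0), (0.0, 2.0)]
res = dual_annealing(neg_pseudo_loglik, bounds, args=(h, q), seed=2)
print(res.x)   # a, b, h0, log(sigma^2), theta
```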

A Study on the Build-up Model for the Discount Rate of Technology Valuation including Intellectual Property Risk (지식자산위험을 고려한 기술가치평가 할인율 적산모형에 관한 연구)

  • Sung, Oong-Hyun
    • Journal of Korea Technology Innovation Society / v.11 no.2 / pp.241-263 / 2008
  • Within any income approach, a discount rate is used to convert projected free cash flow to its present value. In valuing companies, the most frequently used discount rate is the weighted average cost of capital (WACC) at the aggregate level. Technology valuation, however, differs from discounting aggregate corporate cash flow, since it concerns an individual intellectual property asset; blindly applying a standard discount rate such as the WACC in technology valuation is therefore unlikely to lead to the right result. The primary focus of this paper is to establish the structure of the discount rate for technology valuation and to suggest a method of estimation. To determine an appropriate discount rate for technology valuation, the levels of technology risk, market risk, and competitive risk should be included in the structure of the discount rate. This paper suggests a build-up model, an expansion of the CAPM, which consists of three components: (1) a risk-free rate of return, (2) a general market risk premium and beta, and (3) an intellectual property risk premium related to technology risk and specific target market risk (the resulting rate is sketched after this entry). Until now, however, there has been no specific checklist for examining intellectual property risk and no specific method for quantifying that risk as a risk premium. This paper develops 10 elements for determining the level of intellectual property risk, and applies estimation functions such as linear, natural log, and exponential functions to transform the level of risk into a risk premium. A limitation of this paper is that the range of the intellectual property risk premium is inferred from information published by foreign and domestic valuation agencies. In sum, this paper explores the development of an intellectual property discount rate for technology valuation and presents a method for quantifying the intellectual property risk premium.

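A minimal sketch of the build-up rate, with illustrative transform functions mapping a 0-1 intellectual property risk level to a premium; the premium cap and the rescaling of the 10-element checklist to a 0-1 level are assumptions, not the paper's calibration.

```python
import math

def build_up_discount_rate(risk_free, beta, market_premium, ip_risk_level,
                           transform="linear", max_premium=0.10):
    """Build-up rate = risk-free rate + beta * market risk premium
    + intellectual property risk premium.  Each transform maps a 0-1 risk
    level to [0, max_premium]; the 10% cap is an illustrative assumption."""
    if transform == "linear":
        ip_premium = max_premium * ip_risk_level
    elif transform == "log":
        ip_premium = max_premium * math.log1p(ip_risk_level) / math.log(2.0)
    else:  # exponential
        ip_premium = max_premium * (math.exp(ip_risk_level) - 1.0) / (math.e - 1.0)
    return risk_free + beta * market_premium + ip_premium

# e.g., 4% risk-free, beta 1.2, 6% market premium, IP risk level 0.6
# (hypothetically rescaled from the 10-element checklist):
print(build_up_discount_rate(0.04, 1.2, 0.06, 0.6))
```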

Image Watermark Method Using Multiple Decoding Keys (다중 복호화 키들을 이용한 영상 워터마크 방법)

  • Lee, Hyung-Seok;Seo, Dong-Hoan;Cho, Kyu-Bo
    • Korean Journal of Optics and Photonics / v.19 no.4 / pp.262-269 / 2008
  • In this paper, we propose an image watermark method using multiple decoding keys. The advantages of this method are that multiple original images can be reconstructed from the same watermark image by using multiple decoding keys, and that the quality of the reconstructed images is clearly enhanced, based on the idea of Walsh codes, without any side-lobe components in the decoding process. The zero-padded original images, each multiplied by a random-phase pattern, are Fourier transformed. Encoded images are then obtained by taking the real-valued data from these Fourier-transformed images. The embedding images are obtained as the products of the encoded images with independent Walsh codes and new random-phase images. The decoding keys are obtained by multiplying these random-phase images with the same Walsh code images used in the embedding images. A watermark image is then made from the linear superposition of the weighted embedding images and a cover image multiplied with a new independent Walsh code. The original image is simply reconstructed by the inverse Fourier transform of the despread image, obtained by multiplying the watermark image with the decoding key. Computer simulations demonstrate the efficiency of the proposed watermark method with multiple decoding keys and its good robustness to external attacks such as cropping and compression.
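A minimal numerical sketch of the embedding/decoding chain. The 2-D spreading pattern (outer product of a Hadamard row with itself) and the folding of the random-phase key into the encoding step are simplifying assumptions made for brevity; the abstract does not give the exact construction.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N = 64
walsh = hadamard(N)                        # rows: orthogonal Walsh-Hadamard codes

def spread_pattern(i):
    return np.outer(walsh[i], walsh[i])    # 2-D +/-1 spreading pattern (assumed form)

def encode(img):
    """Zero-pad, multiply by a random phase, Fourier transform, keep the real part.
    Zero-padding keeps the image and its conjugate twin separated after decoding."""
    padded = np.zeros((N, N), dtype=complex)
    padded[:N // 2, :N // 2] = img * np.exp(2j * np.pi * rng.random(img.shape))
    return np.real(np.fft.fft2(padded))

def embed(originals, cover, weights):
    """Superpose the spread encoded images onto the cover, itself spread by
    an independent Walsh code; return the watermark and the decoding keys."""
    watermark = cover * spread_pattern(1)
    keys = [spread_pattern(i + 2) for i in range(len(originals))]
    for enc, key, w in zip(map(encode, originals), keys, weights):
        watermark = watermark + w * enc * key
    return watermark, keys

def decode(watermark, key):
    """Despreading: key * key = 1 elementwise recovers the wanted spectrum
    exactly; the cover and the other images stay spread as low-level noise."""
    return np.abs(np.fft.ifft2(watermark * key))

imgs = [rng.random((N // 2, N // 2)) for _ in range(2)]
wm, keys = embed(imgs, cover=rng.random((N, N)), weights=[0.5, 0.5])
print(decode(wm, keys[0])[:N // 2, :N // 2].shape)   # reconstruction of image 0
```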