Title/Summary/Keyword: Log-transformation

Search results: 97

Studies on the Stochastic Generation of Synthetic Streamflow Sequences (I): On the Simulation Models of Streamflow (하천유량의 추계학적 모의발생에 관한 연구(I) -하천유량의 Simulation 모델에 대하여-)

  • 이순탁
    • Water for Future / v.7 no.1 / pp.71-77 / 1974
  • This paper reviews several single-site generation models as groundwork for developing a model that generates synthetic streamflow sequences for perennial (continuously flowing) streams such as the main rivers in Korea. The historical time series is first examined with a time-series technique, namely correlograms, to determine whether a lag-one Markov model satisfactorily represents the historical data. The single-site models examined include an empirical model using the historical probability distribution of the random component; the linear autoregressive model (Markov model, or Thomas-Fiering model) using both logarithms of the data and Matalas's log-normal transformation equations; and finally a gamma-distribution model.

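As a quick illustration of the lag-one Markov generation the paper reviews, here is a minimal sketch in Python; the flow values, the annual (rather than monthly Thomas-Fiering) formulation, and the plain log transform standing in for Matalas's equations are all assumptions made for demonstration.

```python
import numpy as np

def lag_one_markov(q_hist, n_years, seed=0):
    """Lag-one Markov generator: q[t] = mu + r*(q[t-1] - mu) + e*sigma*sqrt(1 - r^2)."""
    rng = np.random.default_rng(seed)
    mu, sigma = q_hist.mean(), q_hist.std(ddof=1)
    r = np.corrcoef(q_hist[:-1], q_hist[1:])[0, 1]  # lag-one serial correlation
    q = np.empty(n_years)
    q[0] = mu
    for t in range(1, n_years):
        q[t] = mu + r * (q[t - 1] - mu) + rng.standard_normal() * sigma * np.sqrt(1 - r**2)
    return q

# Working in log space keeps generated flows positive and handles skew,
# in the spirit of the log-normal transformation approach.
hist = np.array([120.0, 95.0, 150.0, 80.0, 110.0, 130.0, 70.0, 140.0])  # hypothetical flows
synthetic = np.exp(lag_one_markov(np.log(hist), n_years=50))
```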

Penalized variable selection for accelerated failure time models

  • Park, Eunyoung;Ha, Il Do
    • Communications for Statistical Applications and Methods / v.25 no.6 / pp.591-604 / 2018
  • The accelerated failure time (AFT) model is a linear model for the log-transformed survival time that has been introduced as a useful alternative to the proportional hazards (PH) model. In this paper we propose variable-selection procedures for the fixed effects in a parametric AFT model using penalized likelihood approaches. We use three popular penalty functions: the least absolute shrinkage and selection operator (LASSO), adaptive LASSO, and smoothly clipped absolute deviation (SCAD). With these procedures we can select important variables and estimate the fixed effects at the same time. The performance of the proposed method is evaluated using simulation studies, including an investigation of the impact of misspecifying the assumed distribution. The proposed method is illustrated with a primary biliary cirrhosis (PBC) data set.
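
For readers who want to try LASSO-type selection in a parametric AFT model, a minimal sketch follows using the lifelines package; lifelines supports elastic-net-style penalties, so this shows plain LASSO rather than the paper's adaptive LASSO or SCAD. The simulated data and penalty value are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n, p = 200, 6
X = pd.DataFrame(rng.standard_normal((n, p)), columns=[f"x{j}" for j in range(p)])
# Hypothetical AFT data: log-time depends on x0 and x1 only; the rest is noise.
log_t = 1.0 + 0.8 * X["x0"] - 0.5 * X["x1"] + 0.3 * rng.standard_normal(n)
df = X.assign(T=np.exp(log_t), E=(rng.uniform(size=n) < 0.8).astype(int))

# l1_ratio=1.0 makes the elastic-net penalty a pure L1 (LASSO) penalty,
# so variable selection and estimation happen in a single fit.
aft = WeibullAFTFitter(penalizer=0.05, l1_ratio=1.0)
aft.fit(df, duration_col="T", event_col="E")
print(aft.params_)  # unimportant coefficients are shrunk toward zero
```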

Development of an algorithm for solving correspondence problem in stereo vision (스테레오 비젼에서 대응문제 해결을 위한 알고리즘의 개발)

  • Im, Hyuck-Jin;Gweon, Dae-Gab
    • Journal of the Korean Society for Precision Engineering / v.10 no.1 / pp.77-88 / 1993
  • In this paper, we propose a stereo vision system that solves the correspondence problem under large disparity and sudden environmental change, which result from the small distance between the camera and the working objects. First, a specific feature is decomposed into predefined elementary features, which are then combined to obtain coded data for solving the correspondence problem. We use a neural network to extract the elementary features from the specific feature and to provide robustness to noise and some variation in shape. Fourier transformation and log-polar mapping are used to obtain neural-network input data that are invariant to shift, scale, and rotation. Finally, we use an associative memory to obtain the coded data of the specific feature from the combination of elementary features. Even for specific features with some variation in shape, satisfactory 3-dimensional data could be obtained from the corresponding codes.

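The shift/scale/rotation-invariant preprocessing the abstract describes can be sketched with NumPy and OpenCV as below; the file name and output size are hypothetical, and this covers only the invariant-input stage, not the neural network or the associative memory.

```python
import cv2
import numpy as np

def invariant_signature(img):
    """Fourier magnitude removes shift; a log-polar mapping of that magnitude
    turns rotation and scaling into mere translations of the signature."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img.astype(np.float32))))
    h, w = f.shape
    return cv2.warpPolar(f, (w, h), (w / 2, h / 2), min(h, w) / 2,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)

patch = cv2.imread("feature_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
signature = invariant_signature(patch)  # network input instead of raw pixels
```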

The Bioequivalence of Plunazol Tablet (Fluconazole 150 mg) to Three Capsules of Diflucan 50 mg (디푸루칸 캡슐 50 mg (3 캡슐, 플루코나졸 150 mg)에 대한 푸루나졸 정 150 mg의 생물학적 동등성)

  • Chang, Hee-Chul;Lee, Min-Suk;Ryu, Chong-Hyon;Lyu, Seung-Hyo;Cho, Sang-Heon;Choi, Yeon-Jin;Hwang, Ae-Kyung;Kim, Yun-Ah;Park, Si-Hyun;Yoon, Ji-Won;Bae, Kyun-Seop
    • Journal of Pharmaceutical Investigation / v.39 no.3 / pp.207-216 / 2009
  • Fluconazole is an orally administered antifungal drug used for the treatment of tinea corporis and candidiasis, including mycotic pneumonia and skin infections. The dosage of fluconazole varies with the indication, ranging from 50 mg/day to 400 mg/day. A fluconazole 50 mg capsule (three capsules daily) is already available on the Korean market; to improve patient compliance, a fluconazole 150 mg tablet (once-daily administration) was recently developed. The purpose of this study was to evaluate the bioequivalence of three doses of the fluconazole 50 mg capsule (Diflucan 50 mg, Pfizer Korea Inc., the reference drug) and a single dose of the fluconazole 150 mg tablet (Plunazol 150 mg, Daewoong Pharm. Co., Korea) according to the guidelines of the Korea Food and Drug Administration (KFDA). The bioequivalence of three capsules of Diflucan 50 mg and a single tablet of Plunazol 150 mg was investigated in twenty-four healthy male volunteers under a randomized 2×2 crossover design. The volunteers' average age was 24.78±3.27 years, average height 175.56±5.45 cm, and average weight 67.24±6.86 kg. After three capsules of Diflucan 50 mg or a single tablet of Plunazol 150 mg were orally administered, blood was taken at predetermined time intervals and the plasma concentrations of fluconazole were determined using LC-MS-MS. After logarithmic transformation, the 90% confidence intervals for the main parameters were 0.9272-1.0084 for AUCt and 0.8423-0.9544 for Cmax, both within the acceptance range of log 0.8 to log 1.25; the additional parameters (AUClast, t1/2 and MRT) also fell within this range. Therefore, the results of this study confirm the bioequivalence of three capsules of Diflucan 50 mg to one tablet of Plunazol 150 mg.
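
A minimal sketch of the log-scale 90% confidence-interval check is given below; it uses a simple paired t analysis on synthetic data and ignores the sequence and period effects that a full 2×2 crossover ANOVA would model, so the numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_ref = rng.normal(3.0, 0.2, size=24)            # hypothetical log AUC, reference
log_test = log_ref + rng.normal(-0.02, 0.10, 24)   # hypothetical log AUC, test

d = log_test - log_ref                             # within-subject log differences
se = d.std(ddof=1) / np.sqrt(len(d))
t90 = stats.t.ppf(0.95, df=len(d) - 1)             # two-sided 90% interval
ci = np.exp([d.mean() - t90 * se, d.mean() + t90 * se])
print(ci, 0.80 <= ci[0] and ci[1] <= 1.25)         # bioequivalence criterion
```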

A study on the optimal variable transformation method to identify the correlation between ATP and APC (ATP와 APC 간의 관련성 규명을 위한 최적의 변수변환법에 관한 연구)

  • Moon, Hye-Kyung;Shin, Jae-Kyoung;Kim, Yang Sook
    • Journal of the Korean Data and Information Science Society / v.27 no.6 / pp.1465-1475 / 2016
  • To secure safe meals, the microbiological hazards associated with food-poisoning accidents should be monitored and controlled in real situations, and it is necessary to determine the correlation between the conventional aerobic plate count (APC) and the relative light unit (RLU) reading on cookware. In this paper, we investigate the correlation between ATP (RLU) and APC (CFU) by applying three types of transformation (inverse, square root, and log) to the raw data in two steps. Among these, the log transform at the first step was found to be optimal for the cutting board, knife, soup bowl (stainless), and tray (carbon) data. At the second step, the square root-inverse transform was optimal for the cup data and the square root-square root transform for the soup bowl (carbon) data.
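
The transform-selection idea can be illustrated with a short sketch: apply each candidate transform and keep the one giving the strongest Pearson correlation. The synthetic ATP/APC data are assumptions, and for simplicity the same transform is applied to both variables.

```python
import numpy as np

def best_transform(atp, apc):
    """Return the candidate transform maximizing |Pearson correlation|."""
    transforms = {"inverse": lambda x: 1.0 / x, "sqrt": np.sqrt, "log": np.log}
    scores = {name: abs(np.corrcoef(f(atp), f(apc))[0, 1])
              for name, f in transforms.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(3)
apc = rng.lognormal(5.0, 1.0, 50)                # hypothetical CFU counts
atp = apc ** 0.7 * rng.lognormal(0.0, 0.3, 50)   # hypothetical RLU readings
print(best_transform(atp, apc))                  # log wins for data like these
```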

A Content-Aware Load Balancing Technique Based on Histogram Transformation in a Cluster Web Server (클러스터 웹 서버 상에서 히스토그램 변환을 이용한 내용 기반 부하 분산 기법)

  • Hong Gi Ho;Kwon Chun Ja;Choi Hwang Kyu
    • Journal of Internet Computing and Services / v.6 no.2 / pp.69-84 / 2005
  • As the number of Internet users increases rapidly, cluster web server systems have attracted many researchers and Internet service providers. The cluster web server has been developed to efficiently support a large number of users and to provide a highly scalable and available system. Efficient load distribution is important for high performance in a cluster web server, and many content-aware request-distribution techniques have recently been proposed. In this paper, we propose a new content-aware load-balancing technique that evenly distributes the workload to each node in the cluster web server. The proposed technique is based on a hash-histogram transformation, in which each URL entry of the web log file is hashed and the access frequency and file size are accumulated in a histogram. Each user request is assigned to a node through the (hash value → server node) mapping in the histogram transformation. The histogram is updated periodically, so an even distribution of user requests is maintained continuously. In addition to load balancing, our technique exploits the cache effect to improve performance. Simulation results show that our technique performs considerably better than the traditional round-robin method and improves performance by more than 10% compared with the existing workload-aware load-balancing (WARD) method.

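A rough sketch of the hash-histogram idea is shown below; the bucket count, the greedy bucket-to-node assignment rule, and the use of MD5 are all assumptions, since the abstract does not specify them.

```python
import hashlib
from collections import defaultdict

N_BUCKETS = 1024

def bucket(url):
    return int(hashlib.md5(url.encode()).hexdigest(), 16) % N_BUCKETS

def build_mapping(web_log, n_nodes):
    """Accumulate per-bucket load (access frequency x file size) from the
    web log, then assign heavy buckets first so node loads stay even."""
    hist = defaultdict(float)
    for url, size in web_log:                 # (URL, file size) records
        hist[bucket(url)] += size             # frequency accumulates implicitly
    loads, mapping = [0.0] * n_nodes, {}
    for b, load in sorted(hist.items(), key=lambda kv: -kv[1]):
        node = loads.index(min(loads))        # least-loaded node so far
        mapping[b] = node
        loads[node] += load
    return mapping

mapping = build_mapping([("/a.html", 4.0), ("/b.jpg", 90.0), ("/c.css", 2.0)], n_nodes=2)
print(mapping, bucket("/b.jpg"))              # route each request via its bucket
```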

A Comparative Study on Structural Reliability Analysis Methods (구조 신뢰성 해석방법의 고찰)

  • 양영순;서용석
    • Computational Structural Engineering / v.7 no.1 / pp.109-116 / 1994
  • In this paper, various reliability analysis methods for calculating a probability of failure are investigated for their accuracy and efficiency. The crude Monte Carlo method is used as the basis for comparing the various numerical results. Among sampling methods, the importance sampling method and the directional simulation method are considered for overcoming the drawbacks of crude Monte Carlo. Among approximate methods, the conventional Rackwitz-Fiessler method, the 3-parameter Chen-Lind method, and the Rosenblatt transformation method are compared on the basis of the first-order reliability method. As second-order reliability methods, the curvature-fitting paraboloid method, the point-fitting paraboloid method, and the log-likelihood function method are explored to verify the accuracy of the reliability calculations. The methods above run into difficulty unless the limit state equation is expressed explicitly in terms of the random design variables, so general reliability methods are needed for cases where the limit state equation is implicit. For this purpose, the response surface method is used, in which the limit state equation is approximated by regression analysis on response-surface outcomes obtained from structural analysis. Applying these reliability methods to three examples shows that the directional simulation method and the response surface method are very efficient and recommendable for general reliability analysis problems.

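Since crude Monte Carlo is the baseline for every comparison in the paper, a minimal sketch may help; the limit state g = R - S and the distributions are hypothetical.

```python
import numpy as np

def crude_monte_carlo_pf(g, sample, n=1_000_000, seed=4):
    """Estimate the probability of failure P(g(X) <= 0) by direct sampling."""
    x = sample(np.random.default_rng(seed), n)
    return np.mean(g(x) <= 0.0)

# Hypothetical limit state: resistance R ~ N(10, 1.5^2) minus load S ~ N(5, 1^2).
pf = crude_monte_carlo_pf(
    g=lambda x: x[:, 0] - x[:, 1],
    sample=lambda rng, n: np.column_stack([rng.normal(10, 1.5, n),
                                           rng.normal(5, 1.0, n)]))
print(pf)  # ~0.0028, i.e. Phi(-beta) with beta = 5 / sqrt(1.5**2 + 1**2)
```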

White-Box AES Implementation Revisited

  • Baek, Chung Hun;Cheon, Jung Hee;Hong, Hyunsook
    • Journal of Communications and Networks / v.18 no.3 / pp.273-287 / 2016
  • White-box cryptography, presented by Chow et al., is an obfuscation technique for protecting secret keys in software implementations even if an adversary has full access to the implementation of the encryption algorithm and full control over its execution platform. Despite its practical importance, progress has not been substantial; in fact, each time a white-box implementation is proposed, an attack of lower complexity is soon announced. This is mainly because most cryptanalytic methods target specific implementations and there is no general attack tool for white-box cryptography. In this paper, we present an analytic toolbox for white-box implementations in the style of Chow et al. using lookup tables. According to our toolbox, for a substitution-linear transformation cipher on $n$ bits with S-boxes on $m$ bits, the complexity of recovering the key is $O\left((3n/\max(m_Q,m))\,2^{3\max(m_Q,m)} + 2\min\{(n/m)L^{m+3}2^{2m},\ (n/m)L^{3}2^{3m} + n\log L\cdot 2^{L/2}\}\right)$, where $m_Q$ is the input size of the nonlinear encodings, $m_A$ is the minimized block size of the linear encodings, and $L=\mathrm{lcm}(m_A,m_Q)$. As a result, a white-box implementation in the framework of Chow et al. has complexity at most $O\left(\min\{(2^{2m}/m)n^{m+4},\ n\log n\cdot 2^{n/2}\}\right)$, which is much less than $2^n$. To overcome this, we introduce an idea that obfuscates two advanced encryption standard (AES)-128 ciphers at once with input/output encodings on 256 bits. To reduce storage, we use a sparse unsplit input encoding. As a result, our white-box AES implementation has up to 110-bit security against our toolbox, close to that of the original cipher. More generally, one may consider a white-box implementation of $t$ parallel encryptions of AES to increase security.
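
To make the complexity bound concrete, the small sketch below evaluates it numerically; treating the constants inside O(·) literally and reading log as log2 are assumptions made only for rough comparison, and the encoding sizes are hypothetical.

```python
from math import gcd, log2

def toolbox_complexity(n, m, m_Q, m_A):
    """Evaluate the abstract's key-recovery bound for given parameter sizes."""
    L = m_A * m_Q // gcd(m_A, m_Q)             # L = lcm(m_A, m_Q)
    M = max(m_Q, m)
    term1 = (3 * n / M) * 2 ** (3 * M)
    term2 = 2 * min((n / m) * L ** (m + 3) * 2 ** (2 * m),
                    (n / m) * L ** 3 * 2 ** (3 * m) + n * log2(L) * 2 ** (L / 2))
    return term1 + term2

# AES-like sizes: 128-bit state, 8-bit S-boxes; encoding sizes are hypothetical.
print(log2(toolbox_complexity(n=128, m=8, m_Q=4, m_A=8)))  # attack exponent in bits
```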

A Study on Improving Precision Rate in Security Events Using Cyber Attack Dictionary and TF-IDF (공격키워드 사전 및 TF-IDF를 적용한 침입탐지 정탐률 향상 연구)

  • Jongkwan Kim;Myongsoo Kim
    • Convergence Security Journal / v.22 no.2 / pp.9-19 / 2022
  • With the expansion of digital transformation, we are increasingly exposed to cyber attacks, and many institutions and companies operate a signature-based intrusion prevention system at the network perimeter to block incoming attacks. However, strict blocking rules cannot be applied without disrupting legitimate ICT services, which causes many false alarms and lowers operational efficiency. Much research using artificial intelligence is therefore being performed to improve attack-detection accuracy, but most of it relies on specific research data sets that do not resemble real network traffic, so the results cannot be used in actual systems. In this paper, we propose a technique that classifies major attack keywords in the security event logs collected from an actual system, assigns a weight to each keyword, and then performs a similarity check using TF-IDF to determine whether an actual attack has occurred.
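
A compact sketch of the keyword-dictionary-plus-TF-IDF idea follows, using scikit-learn; the dictionary entries, the weighting-by-repetition trick, and the similarity threshold are all assumptions, as the paper does not publish its dictionary.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical attack-keyword dictionary with analyst-assigned integer weights.
ATTACK_DICT = {"union": 3, "select": 3, "script": 2, "passwd": 3, "wget": 2}

def looks_like_attack(payload, known_attacks, threshold=0.3):
    """Up-weight dictionary keywords by repetition, then flag the event if
    its TF-IDF cosine similarity to any known attack payload is high."""
    boosted = payload + "".join(" " + kw for kw, w in ATTACK_DICT.items()
                                if kw in payload.lower() for _ in range(w))
    m = TfidfVectorizer().fit_transform(known_attacks + [boosted])
    return cosine_similarity(m[-1], m[:-1]).max() >= threshold

known = ["GET /?id=1 union select password from users",
         "GET /cgi-bin/test?cmd=cat /etc/passwd"]
print(looks_like_attack("GET /index?q=union select * from accounts", known))
```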

Wavelet Based Non-Local Means Filtering for Speckle Noise Reduction of SAR Images (SAR 영상에서 웨이블렛 기반 Non-Local Means 필터를 이용한 스펙클 잡음 제거)

  • Lee, Dea-Gun;Park, Min-Jea;Kim, Jeong-Uk;Kim, Do-Yun;Kim, Dong-Wook;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics / v.23 no.3 / pp.595-607 / 2010
  • This paper addresses the problem of reducing speckle noise in SAR images by wavelet transformation, using a non-local means (NLM) filter originally developed for Gaussian noise removal. Log-transforming a SAR image converts the multiplicative speckle noise into additive noise; NLM filtering and wavelet thresholding are then used to reduce the additive noise, followed by an exponential transformation. The NLM filter is an image-denoising method that replaces each pixel by a weighted average of all similar pixels in the image, but it takes considerable time to process all possible pairs of pixels. This paper therefore also proposes an alternative strategy that uses the t-test to efficiently eliminate dissimilar pixel pairs. Extensive simulations show that the proposed filter outperforms many existing filters in terms of quantitative measures such as PSNR and DSSIM, qualitative judgments of image quality, and the computational time required to restore images.
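
The homomorphic pipeline described above can be sketched with scikit-image; skimage's built-in NLM stands in for the paper's t-test-accelerated variant, and the parameter choices and simulated speckle model are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, denoise_wavelet, estimate_sigma

def despeckle(sar, h_scale=0.8):
    """Log transform makes the multiplicative speckle additive; NLM plus
    wavelet soft-thresholding remove it; exp maps back to intensity."""
    log_img = np.log1p(sar.astype(np.float64))
    sigma = estimate_sigma(log_img)
    out = denoise_nl_means(log_img, h=h_scale * sigma, sigma=sigma,
                           patch_size=5, patch_distance=6, fast_mode=True)
    out = denoise_wavelet(out, sigma=sigma, mode="soft")
    return np.expm1(out)

rng = np.random.default_rng(5)
clean = np.full((64, 64), 100.0)
speckled = clean * rng.gamma(4.0, 1.0 / 4.0, clean.shape)  # hypothetical 4-look speckle
restored = despeckle(speckled)
```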