• Title/Summary/Keyword: Orthogonal set

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik; Choi, Seong-Gu; Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2004.11c / pp.30-32 / 2004
  • With 3-D vision measuring, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have been developed in two broad categories: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance of the object, so the proposed calibration method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data have been used to test the proposed method, and very good results have been obtained. We investigated the distance error affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image. The average scale factor tends to fluctuate only slightly and makes the distance error decrease. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one more step from static environments toward real-world use such as autonomous land vehicles.
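
A minimal numeric sketch of the pinhole relation the abstract relies on: a line of known physical width projects to a pixel width proportional to focal length over distance, so distance (and, via the ratio of two projected widths, an orientation cue) can be recovered. The focal length and grid line widths below are hypothetical and lens distortion is ignored; this is not the paper's full calibration algorithm.

```python
# Ideal pinhole model: a line of physical width W at distance Z projects to
# w = f * W / Z pixels, so Z can be recovered from a measured width.
F_PX = 800.0                     # hypothetical focal length in pixels
W_NARROW, W_WIDE = 0.02, 0.04    # hypothetical grid line widths [m]

def distance_from_width(w_px, W):
    """Estimate object distance from one projected line width."""
    return F_PX * W / w_px

def width_ratio(w_narrow_px, w_wide_px):
    """Perspective ratio of the two projected widths; it stays near
    W_NARROW / W_WIDE for a fronto-parallel frame and deviates as the
    frame rotates, which is the cue the calibration exploits."""
    return w_narrow_px / w_wide_px

# Simulated measurements of the two line widths at Z = 2 m.
Z_true = 2.0
w_n = F_PX * W_NARROW / Z_true
w_w = F_PX * W_WIDE / Z_true
print(distance_from_width(w_n, W_NARROW))   # -> 2.0
print(width_ratio(w_n, w_w))                # -> 0.5
```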

Design the Structure of Scaling-Wavelet Mixed Neural Network (스케일링-웨이블릿 혼합 신경회로망 구조 설계)

  • Kim, Sung-Soo; Kim, Yong-Taek; Seo, Jae-Yong; Cho, Hyun-Chan; Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.6 / pp.511-516 / 2002
  • Neural networks can suffer from the problem that the amount of computation required for network learning grows excessively with the dimension of the input. To overcome this problem, wavelet neural networks (WNN), which use orthogonal basis functions in the hidden nodes, have been proposed. One can compose the activation functions of a WNN by determining the scale and center of each wavelet function. In this paper, when composing the WNN from wavelet functions, we also include a single scaling function as a node function. The intent is that the scaling function approximates the target function roughly, while the wavelet functions approximate it finely. During the determination of the parameters, the wavelet functions are found by a global search for solutions suited to the given problem using a genetic algorithm, and finally the back-propagation algorithm is used to learn the weights.
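
As a rough structural illustration of the mixed network described above (not the paper's GA plus back-propagation training), the sketch below uses one Gaussian scaling node for the coarse shape and several Mexican-hat wavelet nodes for local detail, and fits only the linear output weights by least squares; all centers and scales are arbitrary choices.

```python
import numpy as np

def scaling_node(x, center=0.0, scale=1.0):
    # Gaussian scaling function: captures the coarse shape of the target.
    return np.exp(-0.5 * ((x - center) / scale) ** 2)

def wavelet_node(x, center, scale):
    # Mexican-hat wavelet: captures local detail around `center`.
    u = (x - center) / scale
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) * np.exp(-0.1 * x ** 2)   # toy target function

# Hidden layer: one scaling node (coarse) plus wavelet nodes (fine detail).
centers = np.linspace(-3, 3, 7)
H = np.column_stack(
    [scaling_node(x, 0.0, 2.0)] +
    [wavelet_node(x, c, 0.6) for c in centers]
)

# Output weights by least squares (a stand-in for GA + back-propagation).
w, *_ = np.linalg.lstsq(H, target, rcond=None)
print("max approximation error:", np.max(np.abs(H @ w - target)))
```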

An ICA-Based Subspace Scanning Algorithm to Enhance Spatial Resolution of EEG/MEG Source Localization (뇌파/뇌자도 전류원 국지화의 공간분해능 향상을 위한 독립성분분석 기반의 부분공간 탐색 알고리즘)

  • Jung, Young-Jin; Kwon, Ki-Woon; Im, Chang-Hwan
    • Journal of Biomedical Engineering Research / v.31 no.6 / pp.456-463 / 2010
  • In the present study, we proposed a new subspace scanning algorithm to enhance the spatial resolution of electroencephalography (EEG) and magnetoencephalography (MEG) source localization. Subspace scanning algorithms, represented by the multiple signal classification (MUSIC) algorithm and the first principal vector (FINE) algorithm, have been widely used to localize asynchronous multiple dipolar sources in human cerebral cortex. The conventional MUSIC algorithm used principal component analysis (PCA) to extract the noise vector subspace, thereby having difficulty in discriminating two or more closely-spaced cortical sources. The FINE algorithm addressed the problem by using only a part of the noise vector subspace, but there was no golden rule to determine the number of noise vectors. In the present work, we estimated a non-orthogonal signal vector set using independent component analysis (ICA) instead of using PCA and performed the source scanning process in the signal vector subspace, not in the noise vector subspace. Realistic 2D and 3D computer simulations, which compared the spatial resolutions of various algorithms under different noise levels, showed that the proposed ICA-MUSIC algorithm has the highest spatial resolution, suggesting that it can be a useful tool for practical EEG/MEG source localization.
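
A toy sketch of the scanning idea under stated assumptions: the lead field is a random matrix rather than a real EEG/MEG forward model, FastICA from scikit-learn supplies the non-orthogonal signal vectors, and each candidate location is scored by its subspace correlation with the span of the ICA mixing vectors (scanning in the signal subspace rather than the noise subspace).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Toy forward model: 32 sensors, 500 candidate source locations.
# A real EEG/MEG lead field would come from a head model; this one is random.
n_sensors, n_cand, n_samples = 32, 500, 1000
L = rng.standard_normal((n_sensors, n_cand))

# Two asynchronous sources at known candidate indices drive the sensors.
true_idx = [50, 300]
S = rng.laplace(size=(2, n_samples))          # non-Gaussian source activity
X = L[:, true_idx] @ S + 0.05 * rng.standard_normal((n_sensors, n_samples))

# ICA estimates a (generally non-orthogonal) signal vector set: the columns
# of the mixing matrix span the signal subspace.
ica = FastICA(n_components=2, random_state=0)
ica.fit(X.T)
A = ica.mixing_                               # shape (n_sensors, 2)
Q, _ = np.linalg.qr(A)                        # orthonormal basis of span(A)

# Scan: subspace correlation between each lead-field column and span(A).
Ln = L / np.linalg.norm(L, axis=0)
score = np.linalg.norm(Q.T @ Ln, axis=0)      # close to 1 near true sources
print("top candidates:", np.argsort(score)[-2:])  # expect indices 50 and 300
```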

Approximate Shape Optimization Technique by Sequential Design Domain (순차설계영역을 이용한 근사 형상최적에 관한 연구)

  • 김우현; 임오강
    • Journal of the Computational Structural Engineering Institute of Korea / v.17 no.1 / pp.31-38 / 2004
  • The mechanical design process is generally accomplished by design, analysis, and test. Designers use programs suited to their purpose and repeatedly obtain responses from a simulation program, which serves as a sub-program for optimization. In this paper, shape optimization using an approximate optimization technique is carried out with a sequential design domain (SDD). In addition, an algorithm that executes Pro/Engineer and ANSYS automatically is adopted in the SDD-based approximate optimization program. It is difficult to approximate a design problem accurately over the whole design space; however, a reasonably accurate approximation can be constructed when an SDD is applied. The SDD starts with a certain range offset from the midpoint of the initial design domain, and the SDD of the next step is determined by a move limit. The convergence criterion requires that the optimal point be located within the SDD for two consecutive steps. The PLBA (Pshenichny-Lim-Belegundu-Arora) algorithm is used to solve the approximate optimization problems; this algorithm uses second-order information and an active set strategy to determine the search direction of the design variables.
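
A simplified sketch of the sequential-design-domain loop under stated assumptions: the Pro/Engineer and ANSYS analyses are replaced by an analytic test function, the local approximation is a least-squares quadratic fitted to samples inside the current SDD, the PLBA solver is replaced by a bounded scipy call, and the move-limit and convergence logic is reduced to a simple shrink rule.

```python
import numpy as np
from scipy.optimize import minimize

def analysis(x):
    # Stand-in for an expensive Pro/Engineer + ANSYS run (Rosenbrock function).
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def quad_surrogate(samples, values):
    # Fit a full quadratic response surface by least squares.
    def basis(x):
        x1, x2 = x
        return [1, x1, x2, x1 * x1, x1 * x2, x2 * x2]
    A = np.array([basis(s) for s in samples])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return lambda x: np.array(basis(x)) @ coef

x = np.array([-1.5, 2.0])      # initial design
half = np.array([1.0, 1.0])    # half-width of the sequential design domain

for it in range(20):
    lo, hi = x - half, x + half
    # Sample the current SDD and build the local approximation.
    pts = np.random.default_rng(it).uniform(lo, hi, size=(15, 2))
    surr = quad_surrogate(pts, np.array([analysis(p) for p in pts]))
    res = minimize(surr, x, bounds=list(zip(lo, hi)))  # stand-in for PLBA
    step = res.x - x
    x = res.x
    # Simplified move limit: shrink the SDD once the optimum stays well inside.
    if np.all(np.abs(step) < 0.5 * half):
        half *= 0.5
    if np.all(half < 1e-3):
        break

print("approximate optimum:", x, "f =", analysis(x))
```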

Collapse response assessment of low-rise buildings with irregularities in plan

  • Manie, Salar; Moghadam, Abdoreza S.; Ghafory-Ashtiany, Mohsen
    • Earthquakes and Structures / v.9 no.1 / pp.49-71 / 2015
  • The present paper aims at evaluating the damage and collapse behavior of low-rise buildings with unidirectional mass irregularities in plan (torsional buildings). In previous earthquake events, such buildings have been exposed to extensive damage and even total collapse in some cases. To investigate the performance and collapse behavior of such buildings from a probabilistic point of view, three-dimensional three- and six-story reinforced concrete models with unidirectional mass eccentricities ranging from 0% to 30%, designed to modern seismic design code provisions for the intermediate ductility class, were subjected to nonlinear static analysis as well as extensive nonlinear incremental dynamic analysis (IDA) under a set of far-field real ground motions containing 21 two-component records. The performance of each model was then examined by calculating conventional seismic design parameters, including the response reduction (R), structural overstrength (Ω), and structural ductility (μ) factors, the probability distribution of maximum inter-story drift responses in two orthogonal directions, and the collapse margin ratio (CMR) as an indicator of performance. Results demonstrate that substantial differences exist between the behavior of regular and irregular buildings in terms of lateral load capacity and collapse margin ratio. Results also indicate that current seismic design parameters can be non-conservative for buildings with high levels of plan eccentricity, and such structures do not meet the target "life safety" performance level based on the safety margin against collapse. The adverse effects of plan irregularity on the collapse safety of structures become more pronounced as the number of stories increases.
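
The performance factors named in the abstract reduce to simple ratios of pushover and IDA quantities; the sketch below shows those standard definitions with made-up numbers that are not results from the paper.

```python
# Illustrative calculation of the performance factors named above.
# All input numbers are hypothetical, not results from the paper.

V_elastic = 1200.0   # elastic base-shear demand [kN]
V_yield   = 400.0    # idealized yield strength from pushover [kN]
V_design  = 250.0    # code design base shear [kN]
d_ult, d_yield = 0.24, 0.06   # roof displacements at ultimate and yield [m]

R_factor     = V_elastic / V_design   # response reduction factor (R)
overstrength = V_yield / V_design     # structural overstrength (Omega)
ductility    = d_ult / d_yield        # structural ductility (mu)

# Collapse margin ratio from IDA: median collapse intensity divided by the
# maximum-considered-earthquake intensity.
Sa_collapse_median = 1.8   # median Sa causing collapse [g]
Sa_MCE             = 0.9   # MCE-level Sa [g]
CMR = Sa_collapse_median / Sa_MCE

print(f"R = {R_factor:.1f}, Omega = {overstrength:.1f}, "
      f"mu = {ductility:.1f}, CMR = {CMR:.1f}")
```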

Structural identification based on substructural technique and using generalized BPFs and GA

  • Ghaffarzadeh, Hosein; Yang, T.Y.; Ajorloo, Yaser Hosseini
    • Structural Engineering and Mechanics / v.67 no.4 / pp.359-368 / 2018
  • In this paper, a method is presented to identify the physical and modal parameters of a multistory shear building based on a substructural technique using generalized block pulse (BP) operational matrices and a genetic algorithm. The substructure approach divides a complete structure into several substructures in order to significantly reduce the number of unknown parameters for each substructure, so that identification can be conducted independently on each substructure. Block pulse functions are a set of orthogonal functions that have been used in recent years as useful tools in signal characterization. Assuming that the input-output data of the system are known, their BP coefficients can be calculated numerically. By using generalized BP operational matrices, the substructural dynamic vibration equations can be converted into algebraic equations, from which the BP coefficients for each story can be estimated. A cost function can then be defined for each story based on the original and estimated BP coefficients, and physical parameters such as mass, stiffness, and damping can be obtained by minimizing the cost functions with a genetic algorithm. The modal parameters can then be computed from the physical parameters. This method does not require that all floors be equipped with sensors simultaneously. To prove its validity, a numerical simulation of a shear building excited by two different normally distributed random signals is presented. To evaluate the effect of noise, random white measurement noise is added to the noise-free structural responses. The results reveal that the proposed method can be beneficial in structural identification, with low computational expense and high accuracy.
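
A minimal sketch of the block pulse expansion that underlies the method: BP coefficients are sub-interval averages of a signal, the signal is reconstructed as a piecewise-constant series, and a coefficient-mismatch cost of the kind minimized by the genetic algorithm is defined. The substructuring, operational matrices, and GA itself are not reproduced, and the story response used is a toy signal.

```python
import numpy as np

def bp_coefficients(signal, t, m):
    """Block pulse coefficients: the average of the signal on each of m
    equal sub-intervals of [t[0], t[-1]]."""
    edges = np.linspace(t[0], t[-1], m + 1)
    coeffs = np.empty(m)
    for i in range(m):
        mask = (t >= edges[i]) & (t < edges[i + 1])
        coeffs[i] = signal[mask].mean()
    return coeffs

def bp_reconstruct(coeffs, t):
    """Piecewise-constant reconstruction from BP coefficients."""
    m = len(coeffs)
    idx = np.minimum((m * (t - t[0]) / (t[-1] - t[0])).astype(int), m - 1)
    return coeffs[idx]

def cost(original_coeffs, estimated_coeffs):
    # Story-wise cost of the kind a GA would minimize: BP coefficient mismatch.
    return np.sum((original_coeffs - estimated_coeffs) ** 2)

t = np.linspace(0.0, 10.0, 2000)
response = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)   # toy story response

c = bp_coefficients(response, t, m=64)
approx = bp_reconstruct(c, t)
print("BP approximation RMS error:", np.sqrt(np.mean((approx - response) ** 2)))
```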

Performance of OFDM using Beam-switching and Space-Time coding in Wireless Personal Area Network (무선 개인 영역망 환경에서 빔 스위칭과 시공간부호를 적용한 OFDM 전송방식의 성능)

  • Yoon, Seok-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC / v.47 no.7 / pp.85-92 / 2010
  • In this paper, we consider orthogonal frequency division multiplexing (OFDM) based transmission incorporating beam-switching and space-time coding. Specifically, we consider three configurations: (1) the beamforming technique, (2) the spatial diversity technique, and (3) their combination, and evaluate their performance in a wireless personal area network (WPAN) environment. For the beamforming technique, we consider beam-switching performed at the RF front-end with a pre-defined set of beams, and for the space-time coding, we consider the Alamouti scheme with antenna selection. For the combined scheme, we divide the antennas into two groups to generate two independent beams and apply the two-antenna Alamouti scheme over the two beams. For these three configurations, performance is evaluated in terms of the SNR gain.
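
A per-subcarrier sketch of Alamouti encoding and combining as used in the combined scheme, with the two transmit branches standing in for the two beams; flat subcarrier channels are assumed and the RF beam-switching itself is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                                    # OFDM subcarriers
qpsk = (rng.integers(0, 2, (2, n_sc)) * 2 - 1 +
        1j * (rng.integers(0, 2, (2, n_sc)) * 2 - 1)) / np.sqrt(2)
s1, s2 = qpsk                                # two symbols per subcarrier

# Per-subcarrier flat channels from the two beams (or antennas) to the receiver.
h1 = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)
h2 = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)

# Alamouti encoding over two OFDM symbol periods:
#   period 1: branch 1 sends s1,   branch 2 sends s2
#   period 2: branch 1 sends -s2*, branch 2 sends s1*
noise = lambda: 0.05 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
r1 = h1 * s1 + h2 * s2 + noise()
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise()

# Linear combining recovers the symbols with diversity gain |h1|^2 + |h2|^2.
g = np.abs(h1) ** 2 + np.abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print("symbol error:", np.max(np.abs(s1_hat - s1)), np.max(np.abs(s2_hat - s2)))
```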

Eliciting stated preferences for drugs reimbursement decision criteria in South Korea (선택실험법을 이용한 의약품 급여결정기준에 대한 선호분석)

  • Lim, Min-Kyoung; Bae, Eun-Young
    • Health Policy and Management / v.19 no.4 / pp.98-120 / 2009
  • The purpose of this study is to elicit preferences for drug listing decision criteria and to estimate the ICER threshold in South Korea using the discrete choice experiment (DCE) method. To collect the data, a DCE survey was administered to a sample of subjects who were either educated in the principal concepts of pharmacoeconomics or were decision makers within that field. Subjects chose between alternative drug profiles differing in four attributes: ICER, uncertainty, budget impact, and severity of disease. An orthogonal and balanced design was determined through a computer algorithm to obtain the optimal set of drug profiles. The survey employed 15 hypothetical choice sets. A random-effects probit model was used to analyze the relative importance of the attributes and the probabilities of a recommendation response. Parameter estimates from the models indicated that three attributes (ICER, budget impact, and severity of disease) influenced respondents' choices significantly (p < 0.001), and each parameter displayed the expected sign. The lower the ICER, the higher the probability of choosing that alternative. Respondents also preferred lower levels of uncertainty and a smaller impact on the health service budget, although the uncertainty attribute was not statistically significant. They were also more likely to choose drugs for serious diseases rather than mild or moderate ones. The ICER threshold, at which the probability of a recommendation was 0.5, was 29,000,000 KRW/QALY in the expert group and 46,500,000 KRW/QALY in the industry group. We also found that those in our sample were willing to accept a high ICER to obtain medication for severe diseases. This study demonstrates that cost-effectiveness, budget impact, and severity of disease are the main reimbursement decision criteria in South Korea, and that DCE can be a useful tool in analyzing a decision-making process where a variety of factors are considered and prioritized.
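
A small illustration of how an ICER threshold follows from a probit index: the recommendation probability equals 0.5 exactly where the linear index is zero, so the threshold solves a linear equation in the ICER attribute. The coefficients below are invented for illustration and are not the study's estimates.

```python
# Hypothetical probit index for "recommend reimbursement":
#   index = b0 + b_icer * ICER + b_budget * BUDGET + b_severe * SEVERE
# P(recommend) = Phi(index) equals 0.5 when index = 0, so the ICER threshold
# solves b0 + b_icer * ICER_th + b_budget * BUDGET + b_severe * SEVERE = 0.
# All coefficients are made-up values for illustration only.

b0       =  1.8     # intercept
b_icer   = -0.062   # per million KRW/QALY (negative: higher ICER, less likely)
b_budget = -0.010   # per billion KRW of budget impact
b_severe =  0.90    # indicator for severe disease

def icer_threshold(budget_impact, severe):
    """ICER (million KRW/QALY) at which P(recommend) = 0.5."""
    return -(b0 + b_budget * budget_impact + b_severe * severe) / b_icer

print("mild disease, 10 billion KRW impact  :", icer_threshold(10, 0))
print("severe disease, 10 billion KRW impact:", icer_threshold(10, 1))
```

The second line prints a higher threshold than the first, mirroring the qualitative finding that respondents accept a higher ICER for severe diseases.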

Physically Compatible Characteristic Length of Cutting Edge Geometry (공구날 특이길이의 물리적 적합성 고찰)

  • Ahn, Il-Hyuk; Kim, Ik-Hyun; Hwang, Ji-Hong
    • Journal of the Korean Society for Precision Engineering / v.29 no.3 / pp.279-288 / 2012
  • The material removal mechanism in machining is significantly affected by the cutting edge geometry. Its effect becomes even more substantial when the depth of cut is small compared to the characteristic length that represents the shape and size of the cutting edge. Conventionally, the radius or focal length has been employed as the characteristic length, under the assumption that the shape of the cutting edge is round or parabolic. In reality, however, the radius or focal length can be determined in various ways even for the same tool edge profile, depending on the region of the measured profile considered to be the cutting edge and the constraints set in constructing the best-fitted circle or parabola. In this regard, the present study proposes various models for determining the characteristic length in terms of radius or focal length. Their physical compatibility is validated by carrying out 2D orthogonal cutting experiments using inserts with a wide range of characteristic lengths (30~180 μm in terms of radius) and then investigating the correlation between the characteristic length and the cutting forces. This validation is based on the common belief that the larger the characteristic length, the blunter the cutting edge and the higher the cutting forces. Interestingly, the results showed that the correlation is higher for the radius or focal length obtained with the constraint that the center of the best-fitted circle or the focus of the best-fitted parabola lie on the bisector of the tool wedge angle.
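
A minimal sketch of the two characteristic-length fits discussed above: an algebraic (Kasa) least-squares circle fit giving an edge radius, and a quadratic fit giving a focal length, applied to a synthetic noisy edge profile. The wedge-angle-bisector constraint that the paper finds important is not imposed here.

```python
import numpy as np

def fit_circle_radius(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns the radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sqrt(c + cx ** 2 + cy ** 2)

def fit_parabola_focal_length(x, y):
    """Fit y = a*x^2 + b*x + c; the focal length of the parabola is 1/(4|a|)."""
    a, b, c = np.polyfit(x, y, 2)
    return 1.0 / (4.0 * abs(a))

# Synthetic edge profile: a circular arc of radius 60 um plus measurement noise.
R_true = 60.0                                   # [um]
theta = np.linspace(-0.6, 0.6, 80)
x = R_true * np.sin(theta)
y = R_true * (1 - np.cos(theta)) + np.random.default_rng(0).normal(0, 0.2, 80)

print("fitted edge radius [um]  :", fit_circle_radius(x, y))
print("fitted focal length [um] :", fit_parabola_focal_length(x, y))
```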

Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle

  • Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan
    • Asian-Australasian Journal of Animal Sciences / v.29 no.12 / pp.1682-1687 / 2016
  • The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first-lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age at recording, and year and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest at the middle of lactation; in contrast, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213 ± 0.007). Genetic, permanent environmental, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for the random regression coefficients could be used for the national genetic evaluation of dairy cattle in Iran.
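
A short sketch of how fourth-order Legendre covariates for a random regression test-day model are typically built: days in milk are rescaled to [-1, 1] and normalized Legendre polynomials are evaluated there. The DIM range and normalization are common conventions assumed here, and the variance-component estimation itself is not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5, dim_max=305, order=4):
    """Normalized Legendre polynomial covariates (orders 0..order) evaluated
    at days in milk (DIM) rescaled to [-1, 1], as commonly used in random
    regression test-day models."""
    x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    # legvander gives P_0..P_order; apply the usual sqrt((2j + 1) / 2) scaling.
    V = legendre.legvander(x, order)
    norm = np.sqrt((2 * np.arange(order + 1) + 1) / 2.0)
    return V * norm

dim = np.array([5, 65, 125, 185, 245, 305])   # example test days
Z = legendre_covariates(dim)                   # (6, 5) covariate matrix
print(np.round(Z, 3))
```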