• Title/Summary/Keyword: two-stage sampling

Factors Affecting Acceptance and Use of E-Tax Services among Medium Taxpayers in Phnom Penh, Cambodia

  • ANN, Samnang;DAENGDEJ, Jirapun;VONGURAI, Rawin
    • The Journal of Asian Finance, Economics and Business / v.8 no.7 / pp.79-90 / 2021
  • The purpose of this research is to identify factors affecting the acceptance and use of e-tax services among medium taxpayers in Phnom Penh, Cambodia. The study followed a quantitative approach using a multi-stage sampling method, in which the sample is selected in two or more stages: stratified random sampling was used in the first stage, followed by purposive sampling. The data were collected from 450 medium taxpayers with experience using e-tax services, located in three tax branches in Phnom Penh. The study adopted confirmatory factor analysis (CFA) and structural equation modeling (SEM) to assess the accuracy and reliability of the model and the influence of the various variables. The primary result showed that behavioral intention has a significant effect on user behavior of e-tax services among medium taxpayers in Phnom Penh, Cambodia. Moreover, performance expectancy, effort expectancy, social influence, and anxiety have a significant impact on behavioral intention, with social influence having the strongest impact, followed by anxiety, performance expectancy, and effort expectancy. Conversely, facilitating conditions, trust in government, and trust in the internet do not influence behavioral intention. A schematic sketch of such a two-stage (stratified, then purposive) selection is given below.
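
The following is a minimal sketch of the kind of two-stage selection described above (stratified random sampling within strata, then purposive filtering), not the authors' actual procedure; the column names `tax_branch` and `used_etax_service` and the sample sizes are hypothetical.

```python
# Illustrative sketch only: stratified random sampling followed by purposive
# filtering. Column names and criteria are hypothetical, not from the paper.
import pandas as pd

def two_stage_select(frame: pd.DataFrame, per_stratum: int, seed: int = 1) -> pd.DataFrame:
    # Stage 1: stratified random sampling -- draw respondents within each stratum.
    stage1 = (frame.groupby("tax_branch", group_keys=False)
                   .apply(lambda g: g.sample(min(per_stratum, len(g)), random_state=seed)))
    # Stage 2: purposive sampling -- keep only units meeting a judgment criterion,
    # here prior experience with the e-tax service.
    return stage1[stage1["used_etax_service"]]

if __name__ == "__main__":
    demo = pd.DataFrame({
        "tax_branch": ["A", "A", "B", "B", "C", "C"] * 50,
        "used_etax_service": [True, False] * 150,
    })
    print(len(two_stage_select(demo, per_stratum=100)))
```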

Novel Schemes to Optimize Sampling Rate for Compressed Sensing

  • Zhang, Yifan;Fu, Xuan;Zhang, Qixun;Feng, Zhiyong;Liu, Xiaomin
    • Journal of Communications and Networks / v.17 no.5 / pp.517-524 / 2015
  • Fast and accurate spectrum sensing over an ultra-wide bandwidth is a major challenge for radio environment cognition. Exploiting the sparsity of the signal, two novel compressed sensing schemes are proposed that reduce the compressed sampling rate compared with the traditional scheme. The first algorithm dynamically adjusts the compression ratio based on modulation recognition and identification of the symbol rate. Furthermore, an improved algorithm that requires no prior information about the modulation or symbol rate, and does not need to reconstruct the signals, is proposed with application potential in practice. The improved algorithm is divided into two stages, an approaching stage and a monitoring stage. The overall sampling rate can be dramatically reduced without deterioration of spectrum-detection performance compared with the conventional static compressed sampling rate algorithm. Numerical results show that the proposed compressed sensing technique can reduce the sampling rate by 35% with an acceptable detection probability above 0.9. A toy illustration of detection from compressed measurements without reconstruction is sketched below.
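
The sketch below is a toy illustration of the idea underlying the reconstruction-free scheme: channel occupancy can be judged directly from compressed measurements y = Φx, and a lower compression ratio M/N may still suffice. It is not the paper's approaching/monitoring algorithm; the signal model, ratios, and threshold are assumptions.

```python
# Minimal illustration (not the paper's algorithm): energy detection directly on
# compressed measurements y = Phi @ x, without signal reconstruction.
import numpy as np

rng = np.random.default_rng(0)
N = 1024                       # Nyquist-rate samples per sensing window
ratios = [0.5, 0.35]           # assumed compression ratios M/N to compare

def occupied(x, ratio, threshold):
    M = int(ratio * N)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
    y = Phi @ x                                      # compressed sampling
    return np.mean(np.abs(y) ** 2) > threshold       # energy detector on y

noise = rng.standard_normal(N)
tone = noise + 2.0 * np.cos(2 * np.pi * 0.1 * np.arange(N))  # sparse occupant

for r in ratios:
    print(r, occupied(noise, r, 2.0), occupied(tone, r, 2.0))
```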

Correspondence between Error Probabilities and Bayesian Wrong Decision Losses in Flexible Two-stage Plans

  • Ko, Seoung-gon
    • Journal of the Korean Statistical Society / v.29 no.4 / pp.435-441 / 2000
  • Ko (1998, 1999) proposed certain flexible two-stage plans that can serve as a one-step interim analysis in ongoing clinical trials. The proposed plans are simultaneously optimal in both a Bayes and a Neyman-Pearson sense. The Neyman-Pearson interpretation is that the average expected sample size is minimized, subject only to the two overall error rates $\alpha$ and $\beta$ of the first and second kind, respectively. The Bayes interpretation is that the Bayes risk, involving both sampling cost and wrong-decision losses, is minimized. An example of this correspondence is given in a binomial setting. A generic numerical sketch of the error rates and expected sample size of a two-stage binomial plan is given below.
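
As a rough illustration of the quantities involved, the sketch below computes the two overall error rates and the expected sample size of a generic two-stage binomial plan by enumerating the first-stage outcomes. It is not Ko's flexible plan; the stage sizes and thresholds are arbitrary assumptions.

```python
# Generic two-stage binomial plan (not the flexible plans of Ko 1998, 1999):
# stop early if the first-stage count is extreme, otherwise take a second stage.
# Stage sizes and thresholds below are arbitrary illustrative assumptions.
from scipy.stats import binom

def reject_prob_and_ess(p, n1=20, n2=20, a1=3, r1=9, r=13):
    """P(reject H0) and expected sample size when the success probability is p."""
    p_reject, ess = 0.0, float(n1)
    for s1 in range(n1 + 1):
        w = binom.pmf(s1, n1, p)
        if s1 >= r1:                       # early rejection
            p_reject += w
        elif s1 > a1:                      # continue to the second stage
            ess += w * n2
            p_reject += w * binom.sf(r - s1 - 1, n2, p)   # P(total successes >= r)
        # s1 <= a1: early acceptance, contributes nothing to rejection
    return p_reject, ess

alpha, ess0 = reject_prob_and_ess(p=0.2)   # error of the first kind under H0: p = 0.2
power, ess1 = reject_prob_and_ess(p=0.4)   # 1 - beta under H1: p = 0.4
print(alpha, 1 - power, ess0, ess1)
```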

An Evaluation of Sampling Design for Estimating an Epidemiologic Volume of Diabetes and for Assessing Present Status of Its Control in Korea (우리나라 당뇨병의 역학적 규모와 당뇨병 관리현황 파악을 위한 표본설계의 평가)

  • Lee, Ji-Sung;Kim, Jai-Yong;Baik, Sei-Hyun;Park, Ie-Byung;Lee, June-Young
    • Journal of Preventive Medicine and Public Health / v.42 no.2 / pp.135-142 / 2009
  • Objectives: An appropriate sampling strategy for estimating the epidemiologic volume of diabetes was evaluated through a simulation. Methods: We analyzed approximately 250 million medical insurance claims submitted to the Health Insurance Review & Assessment Service with diabetes as a principal or subsequent diagnosis at least once in 2003. The database was reconstructed into a 'patient-hospital profile' with 3,676,164 cases, and then into a 'patient profile' of 2,412,082 observations. The patient-profile data were then used to test the validity of the proposed sampling frame and sampling methods for developing diabetes-related epidemiologic indices. Results: The simulation study showed that a stratified two-stage cluster sampling design with a total sample size of 4,000 provides an estimate of 57.04% (95% prediction range, 49.83-64.24%) for the treatment prescription rate of diabetes. The proposed design first stratifies the nation by area ("metropolitan/city/county") and by type of hospital ("tertiary/secondary/primary/clinic") in a 5:10:10:75 proportion. Hospitals are then randomly selected within strata as the primary sampling units, followed by a random selection of patients within hospitals as the secondary sampling units. The difference between the estimate and the parameter value was projected to be less than 0.3%. Conclusions: The proposed sampling scheme will be applied to a subsequent nationwide field survey, both for estimating the epidemiologic volume of diabetes and for assessing the present status of nationwide diabetes control. A schematic sketch of such a stratified two-stage cluster draw is given below.
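
The sketch below shows a stratified two-stage cluster draw of this kind; it is not the study's actual frame, and the column names (`region_type`, `hospital_type`, `hospital_id`, `prescribed`) and sample sizes are hypothetical.

```python
# Illustrative stratified two-stage cluster sampling (not the study's actual frame):
# stratify hospitals, draw hospitals (PSUs) per stratum, then draw patients (SSUs)
# within each sampled hospital. Column names and sizes are hypothetical.
import pandas as pd

def stratified_two_stage(patients: pd.DataFrame, hospitals_per_stratum: int,
                         patients_per_hospital: int, seed: int = 7) -> pd.DataFrame:
    sampled = []
    for _, stratum in patients.groupby(["region_type", "hospital_type"]):
        # Stage 1: sample primary sampling units (hospitals) within the stratum.
        hospital_ids = stratum["hospital_id"].drop_duplicates()
        chosen = hospital_ids.sample(min(hospitals_per_stratum, len(hospital_ids)),
                                     random_state=seed)
        # Stage 2: sample secondary sampling units (patients) within each hospital.
        for hid in chosen:
            grp = stratum[stratum["hospital_id"] == hid]
            sampled.append(grp.sample(min(patients_per_hospital, len(grp)),
                                      random_state=seed))
    return pd.concat(sampled, ignore_index=True)

# e.g. estimate = stratified_two_stage(df, 5, 20)["prescribed"].mean()
```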

Improved Lateral Resolution of Interferometric Microscope Using Precision Scanner (정밀 스캐너를 이용한 간섭 현미경의 가로방향 분해능 향상)

  • 박성림;박도민;류재욱;권대갑
    • Journal of the Korean Society for Precision Engineering / v.15 no.6 / pp.116-123 / 1998
  • An interferometric microscope with improved lateral resolution is presented. A nanometer-resolution XY stage is integrated into a standard temporal phase-shifting interferometer and is used to position the specimen at subpixel offsets of the CCD detector, thereby improving the detector's effective sampling. Two scanning algorithms and their simulation results are also presented. The simulations show that the scanning algorithms significantly improve the CCD detector's sampling and, in turn, the interferometric microscope's lateral resolution. A toy sketch of such subpixel interleaving is given below.
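
The toy sketch below illustrates only the underlying sampling idea (interleaving frames acquired at sub-pixel stage offsets to double the effective sampling density along one axis); it is not either of the paper's scanning algorithms, and the one-dimensional signal and pixel pitch are assumptions.

```python
# Toy illustration (not the paper's algorithms): acquiring the same scene at
# sub-pixel stage offsets and interleaving the frames doubles the effective
# sampling density of a coarse detector along one axis.
import numpy as np

fine = np.sin(2 * np.pi * 0.12 * np.arange(200))     # "true" intensity profile

def detector_read(profile, offset, pitch=2):
    # The detector samples every `pitch` fine positions, starting at `offset`.
    return profile[offset::pitch]

frame0 = detector_read(fine, offset=0)   # stage at nominal position
frame1 = detector_read(fine, offset=1)   # stage shifted by half a pixel

interleaved = np.empty(frame0.size + frame1.size)
interleaved[0::2] = frame0
interleaved[1::2] = frame1               # effective sampling rate is doubled
print(frame0.size, interleaved.size)
```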

Design of Low Area Decimation Filters Using CIC Filters (CIC 필터를 이용한 저면적 데시메이션 필터 설계)

  • Kim, Sunhee;Oh, Jaeil;Hong, Dae-ki
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.71-76 / 2021
  • Digital decimation filters are used in various digital signal processing systems with ADCs, including digital communication systems and sensor network systems. When the sampling rate of digital data is reduced, aliasing occurs, so an anti-aliasing filter is needed to suppress aliasing before down-sampling the data. Since the anti-aliasing filter must have a sharp transition band between the passband and the stopband, the order of the filter is very high; however, as the order increases, the complexity and area of the filter grow and more power is consumed. Therefore, in this paper, we propose two types of decimation filters, focusing on reducing the hardware area. In both cases, the complexity of the circuit is reduced by applying the required down-sampling in two stages instead of all at once. In addition, multiplier-free CIC decimation filters are used as the first-stage decimation filter. In the two designs, the second stage is implemented with a CIC filter and with an anti-aliasing filter followed by a down-sampler, respectively. The filters are designed in Verilog-HDL, and their function and implementation are validated using ModelSim and Quartus, respectively. A behavioral sketch of a basic CIC decimator is given below.
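
The following behavioral model of an N-stage CIC decimator (integrators at the input rate, decimation by R, then combs at the output rate) is a software reference sketch only, not the paper's Verilog-HDL design; R, N, and M are illustrative values.

```python
# Behavioral sketch of an N-stage CIC decimator: N integrators, down-sample by R,
# then N combs with differential delay M. Reference model only.
import numpy as np

def cic_decimate(x, R=8, N=3, M=1):
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                 # N cascaded integrators (running sums)
        y = np.cumsum(y)
    y = y[R - 1::R]                    # down-sample by R
    for _ in range(N):                 # N cascaded combs: y[n] - y[n - M]
        y = y - np.concatenate((np.zeros(M, dtype=y.dtype), y[:-M]))
    return y                           # DC gain is (R * M) ** N

x = np.ones(64, dtype=np.int64)        # unit-DC input
print(cic_decimate(x)[-1], (8 * 1) ** 3)   # settled output equals the DC gain
```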

Sampling Method for Individual Particle Analysis of Atmospheric Aerosol (개별입자 분석을 위한 대기에어로졸의 시료채취법)

  • Seong-Woo Cheon;Jeong-Ho Park
    • Journal of Environmental Science International / v.33 no.2 / pp.113-119 / 2024
  • In this study, the sampling methods best suited to the bimodal mass distribution of atmospheric aerosols and to individual particle analysis were investigated. Samples collected on Quartz, Teflon, and Nuclepore filters were analyzed for individual particles using scanning electron microscopy with an energy-dispersive X-ray spectrometer (SEM/EDS). The pore diameter of the filter and the collection flow rate were then determined using the theoretical collection-efficiency formula for two-stage separation sampling of coarse and fine particles. The Nuclepore filter was found to be the most suitable for identifying the physical and chemical characteristics of atmospheric aerosols, since it separated the sample and allowed counting of the different-sized particles better than either Quartz or Teflon. Nuclepore filters with 8.0 ㎛ and 0.4 ㎛ pores were connected in series and operated at a flow rate of 16.7 L/min for two-stage separation sampling. The results show that it is possible to separate and collect both coarse and fine particles. We expect the proposed methodology to be used for future individual particle analysis of atmospheric aerosols and related research.

Korean women wage analysis using selection models (표본 선택 모형을 이용한 국내 여성 임금 데이터 분석)

  • Jeong, Mi Ryang;Kim, Mijeong
    • Journal of the Korean Data and Information Science Society / v.28 no.5 / pp.1077-1085 / 2017
  • In this study, we identify the major factors that affect Korean women's wages, analyzing data provided by the 2015 Korea Labor Panel Survey (KLIPS). In general, wage data are difficult to analyze because random sampling is infeasible. The Heckman sample selection model is the most widely used method for analyzing data subject to sample selection. Heckman proposed two kinds of selection models: one estimated by the maximum likelihood method, and the Heckman two-stage model. The Heckman two-stage model is known to be robust to the normality assumption on the bivariate error terms. Recently, Marchenko and Genton (2012) proposed the Heckman selection-t model, which generalizes the Heckman two-stage model, and concluded that the selection-t model is more robust to the error assumptions. Employing the two models, we analyzed the data and compared the results. A minimal sketch of the classic Heckman two-step procedure is given below.
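
Below is a minimal sketch of the classic Heckman two-step procedure on simulated data, not the authors' KLIPS analysis; the variable names and the simulated selection and outcome equations are hypothetical.

```python
# Minimal sketch of the Heckman two-step estimator: probit selection equation,
# inverse Mills ratio, then OLS on the selected sample. Simulated data only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
educ = rng.normal(12, 3, n)
kids = rng.integers(0, 3, n)                            # exclusion restriction
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n)

work = (0.5 + 0.1 * educ - 0.6 * kids + u[:, 0]) > 0    # selection equation
log_wage = 1.0 + 0.08 * educ + u[:, 1]                  # outcome, observed if work

# Step 1: probit for selection, then the inverse Mills ratio.
Z = sm.add_constant(np.column_stack((educ, kids)))
probit = sm.Probit(work.astype(float), Z).fit(disp=0)
zb = Z @ probit.params
imr = norm.pdf(zb) / norm.cdf(zb)

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
sel = work
X = sm.add_constant(np.column_stack((educ[sel], imr[sel])))
ols = sm.OLS(log_wage[sel], X).fit()
print(ols.params)   # intercept, education effect, selection term (rho * sigma)
```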

A Study on the Survey of Vocational Training Teachers and Instructors through Institutional Panel Sampling Design (기관패널 표집설계를 통한 훈련 교·강사 실태조사 방안 연구)

  • Jung, Hye-kyung;Jung, Il-chan;Lee, Jin-gu
    • Journal of Practical Engineering Education / v.13 no.2 / pp.393-403 / 2021
  • The purpose of this study is to propose a systematic institution-level panel survey design that lays the foundation for data-based decision-making, with vocational training teachers and instructors as the population. The study proposes the target population and sampling frame, the main elements needed when planning a panel survey. Based on expert advice and empirical data analysis, the sampling unit and a sampling method that takes the outer and inner variables into account are presented, comprehensively considering the representativeness of the data and the efficiency and sustainability of data collection. As a result, taking the vocational training institution as the panel unit, a two-stage stratified proportional sampling plan is proposed so that the institutions selected for the panel, and the vocational training teachers and instructors belonging to them, participate in the survey. On this basis, implications for the panel survey sample design are presented. A sketch of a proportional-allocation two-stage draw is given below.
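
The sketch below illustrates one way a two-stage stratified proportional draw could be organized (proportional allocation of institutions across strata, then selection of teachers and instructors within each sampled institution); it is not the study's design, and the function and parameter names are hypothetical.

```python
# Illustrative two-stage stratified proportional sampling: allocate institutions
# (PSUs) across strata in proportion to stratum size, then draw teachers and
# instructors within each sampled institution. Not the study's actual plan.
import random

def allocate_proportionally(stratum_sizes: dict, total_institutions: int) -> dict:
    grand = sum(stratum_sizes.values())
    return {s: max(1, round(total_institutions * n / grand))
            for s, n in stratum_sizes.items()}

def two_stage_draw(institutions_by_stratum: dict, teachers_by_institution: dict,
                   total_institutions: int, teachers_per_institution: int, seed: int = 3):
    random.seed(seed)
    alloc = allocate_proportionally(
        {s: len(v) for s, v in institutions_by_stratum.items()}, total_institutions)
    panel = []
    for stratum, inst_ids in institutions_by_stratum.items():
        # Stage 1: institutions within the stratum, proportional allocation.
        for inst in random.sample(inst_ids, min(alloc[stratum], len(inst_ids))):
            # Stage 2: teachers/instructors within each sampled institution.
            pool = teachers_by_institution[inst]
            panel.extend(random.sample(pool, min(teachers_per_institution, len(pool))))
    return panel
```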

Pooling shrinkage estimator of reliability for exponential failure model using the sampling plan (n, C, T)

  • Al-Hemyari, Z.A.;Jehel, A.K.
    • International Journal of Reliability and Applications / v.12 no.1 / pp.61-77 / 2011
  • One of the most important problems in estimating the parameter of a failure model is the cost of the experimental sampling units, which can be reduced by using any prior information available about $\theta$ and devising a two-stage pooling shrunken estimation procedure. We propose an estimator of the reliability function $R(t)$ of the exponential model using two-stage time-censored data, when a prior value of the unknown parameter $\theta$ is available from the past. To compare the performance of the proposed estimator with the classical estimator, computer-intensive calculations of the bias, mean squared error, relative efficiency, expected sample size, and percentage of the overall sample size saved were carried out for varying values of the constants involved in the proposed estimator $\tilde{R}(t)$. A generic sketch of the two-stage shrinkage structure is given below.
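
The sketch below shows only the generic structure of a two-stage shrinkage estimator of $R(t) = e^{-t/\theta}$ toward a prior guess $\theta_0$: shrink if a preliminary test accepts the guess, otherwise pool both stages. It is not the authors' (n, C, T) time-censored plan, and the acceptance region and weight are arbitrary assumptions.

```python
# Generic sketch of a two-stage shrinkage estimator of exponential reliability
# R(t) = exp(-t / theta) toward a prior guess theta0 (structure only; not the
# authors' (n, C, T) time-censored plan; weight k and test region are assumed).
import numpy as np

def two_stage_shrunken_reliability(theta0, t, n1=10, n2=10, k=0.5,
                                   true_theta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.exponential(true_theta, n1)           # first-stage failure times
    theta1 = x1.mean()                             # stage-1 MLE of theta
    # Preliminary test: is the prior guess theta0 consistent with stage 1?
    if 0.5 * theta1 <= theta0 <= 2.0 * theta1:     # crude acceptance region (assumed)
        theta_tilde = k * theta1 + (1 - k) * theta0    # shrink toward theta0, stop
        n_used = n1
    else:
        x2 = rng.exponential(true_theta, n2)       # otherwise take a second stage
        theta_tilde = np.concatenate((x1, x2)).mean()  # pooled MLE, no shrinkage
        n_used = n1 + n2
    return np.exp(-t / theta_tilde), n_used

print(two_stage_shrunken_reliability(theta0=2.0, t=1.0))
```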
