Title/Summary/Keyword: complex auxiliary information


Integrated calibration weighting using complex auxiliary information (통합 칼리브레이션 가중치 산출 비교연구)

  • Park, Inho; Kim, Sujin
    • The Korean Journal of Applied Statistics, v.34 no.3, pp.427-438, 2021
  • Two-stage sampling allows us to estimate population characteristics at both the unit and cluster levels together. Given complex auxiliary information, integrated calibration weighting can better reflect level-wise characteristics as well as multivariate characteristics between levels. This paper explores the integrated calibration weighting methods of Estevao and Särndal (2006) and Kim (2019) through a simulation study in which the efficiency of these weighting methods is compared on artificial population data. Two weighting methods stand out as efficient: single-step calibration at the unit level with stacked, individualized auxiliary information, and iterative integrated calibration at each level. Under both methods, cluster calibrated weights are defined as the average of the calibrated weights of the units within the cluster. Both performed very well in terms of goodness of fit when estimating the population totals of auxiliary information shared between clusters and units, and showed small relative bias and relative root mean squared errors when estimating the population totals of survey variables not included in the calibration adjustments.
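
The single-step idea above can be sketched with a linear (GREG-type) calibration, where design weights are rescaled so that weighted auxiliary totals hit known population totals, and cluster weights are then taken as the average of unit weights within each cluster. This is an illustrative sketch on made-up data, not the code of Estevao and Särndal (2006) or Kim (2019):

```python
import numpy as np

def linear_calibrate(d, X, totals):
    """Linear (GREG-type) calibration: rescale design weights d so that
    the weighted totals of the auxiliary matrix X match `totals`."""
    A = X.T @ (d[:, None] * X)                  # X' diag(d) X
    lam = np.linalg.solve(A, totals - X.T @ d)  # calibration multipliers
    return d * (1.0 + X @ lam)                  # calibrated unit weights

# toy two-stage data: 6 units in 3 clusters, 2 auxiliary variables
rng = np.random.default_rng(0)
d = np.full(6, 10.0)                            # design weights
X = rng.uniform(1.0, 5.0, size=(6, 2))          # stacked auxiliary info
totals = X.sum(axis=0) * 10.5                   # known population totals

w = linear_calibrate(d, X, totals)              # satisfies X.T @ w == totals

# cluster-level calibrated weight = average of its units' weights
cluster = np.array([0, 0, 1, 1, 2, 2])
w_cluster = np.bincount(cluster, weights=w) / np.bincount(cluster)
```

By construction, `X.T @ w` reproduces the benchmark totals exactly, which is the goodness-of-fit property the abstract refers to.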

Precise Stereo Matching Technique Based on White Auxiliary Stripe (백색 보조 띠 기반의 정밀 스테레오 정합 기법)

  • Kang, Han-Sol; Ko, Yun-Ho
    • Journal of Korea Multimedia Society, v.22 no.12, pp.1356-1367, 2019
  • This paper proposes a novel active stereo matching technique using a white auxiliary stripe pattern. Conventional active stereo matching techniques, which use two cameras and an active source such as a projector, can estimate disparity information accurately even in areas with low texture, unlike passive techniques. However, conventional active techniques based on color code patterns have difficulty acquiring these patterns robustly when the object is composed of various colors or is exposed to complex lighting conditions. To overcome this problem, the proposed method uses an additional white auxiliary stripe pattern to acquire and localize the color code patterns robustly. A process based on adaptive thresholding and thinning is proposed to extract the auxiliary pattern accurately. Experimental results show that the proposed method measures a stepped sample of known depth more precisely than the conventional method.
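
The adaptive-thresholding step for isolating a bright stripe could look like the following box-filter sketch, one plausible implementation rather than the authors' actual pipeline; an integral image gives O(1) local-mean lookups:

```python
import numpy as np

def adaptive_threshold(img, win=7, c=10.0):
    """Binarize a grayscale image against its local mean over a win x win
    box window - one simple way to isolate a bright stripe pattern."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    h, w = img.shape
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])              # box sums
    return img.astype(float) > s / (win * win) + c        # above local mean?

# synthetic frame: a bright vertical stripe on a dark background
frame = np.zeros((20, 20))
frame[:, 8:11] = 200.0
mask = adaptive_threshold(frame)
```

Stripe pixels exceed their local mean plus the offset `c` and survive; background pixels do not. A thinning pass (not shown) would then reduce the mask to a one-pixel-wide centerline.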

CFG based Korean Parsing Using Sentence Patterns as Syntactic Constraint (구문 제약으로 문형을 사용하는 CFG기반의 한국어 파싱)

  • Park, In-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society, v.9 no.4, pp.958-963, 2008
  • The Korean language has distinctive structural properties that are governed by the semantic constraints of verbs. Moreover, most Korean sentences are complex sentences consisting of a main clause and embedded clauses. It is therefore difficult to write an appropriate syntactic grammar or constraints for Korean, and Korean parsing produces various syntactic ambiguities. In this paper, we show how to describe a CFG-based grammar that uses sentence patterns as syntactic constraints and how to resolve syntactic ambiguities. To this end, we classified 44 sentence patterns, including complex sentences with subordinate clauses, and used them to reduce syntactic ambiguity. Because sentence-pattern information alone cannot resolve every syntactic ambiguity, we also used semantic markers as semantic constraints. Semantic markers can resolve the ambiguity introduced by auxiliary particles or the comitative case particle.
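
A sentence-pattern constraint of this kind can be illustrated minimally as a lookup that rejects candidate parses whose instantiated roles do not match any registered pattern for the verb. The verbs, role names, and patterns below are hypothetical stand-ins, not the paper's 44-pattern inventory:

```python
# hypothetical mini case-frame lexicon: each verb maps to the sets of
# syntactic roles its sentence patterns allow (names are illustrative)
PATTERNS = {
    "meokda": [{"subj", "obj"}],        # 'eat' requires subject + object
    "jada":   [{"subj"}],               # 'sleep' requires only a subject
}

def pattern_consistent(verb, roles):
    """Keep a candidate parse only when the roles it instantiates
    match one of the verb's registered sentence patterns."""
    return any(p == set(roles) for p in PATTERNS.get(verb, []))
```

A parser would call this check on each candidate derivation, pruning those whose role assignments violate the verb's sentence patterns before semantic markers are consulted.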

A New Dispatch Scheduling Algorithm Applicable to Interconnected Regional Systems with Distributed Inter-temporal Optimal Power Flow (분산처리 최적조류계산 기반 연계계통 급전계획 알고리즘 개발)

  • Chung, Koo-Hyung; Kang, Dong-Joo; Kim, Bal-Ho
    • The Transactions of The Korean Institute of Electrical Engineers, v.56 no.10, pp.1721-1730, 2007
  • This paper proposes a new dispatch scheduling algorithm for interconnected regional system operations. The dispatch scheduling problem, formulated as a mixed-integer nonlinear programming (MINLP) problem, can be computed efficiently with the generalized Benders decomposition (GBD) algorithm. GBD provides adequate computation speed and solution convergence because it decomposes the primal problem into a master problem and simpler subproblems. In addition, the inter-temporal optimal power flow (OPF) subproblem of the dispatch scheduling problem comprises many variables and constraints that account for time continuity, and the resulting increase in problem dimension makes the inter-temporal OPF complex. In this paper, a regional decomposition technique based on the auxiliary problem principle (APP) is introduced to obtain an efficient inter-temporal OPF solution through parallel implementation. The method can also find the most economic dispatch schedule incorporating power transactions without disclosing private information, so it can be extended to an efficient dispatch scheduling model for interconnected system operation.

Determination of dosing rate for water treatment using fusion of genetic algorithms and fuzzy inference system (유전알고리즘과 퍼지추론시스템의 합성을 이용한 정수처리공정의 약품주입률 결정)

  • 김용열; 강이석
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 1996.10b, pp.952-955, 1996
  • It is difficult to determine the coagulant feeding rate in a water treatment process because of nonlinearity, multiple variables, and slow response characteristics. To deal with this difficulty, a fusion of genetic algorithms and a fuzzy inference system was used to determine the coagulant feeding rate. Genetic algorithms are robust in complex optimization problems because they use randomized operators and search for the best chromosome, without auxiliary information, over a population of coded parameter sets. To apply the algorithm, we built a lookup table and membership functions from actual operating data of the water treatment process. We determined optimum coagulant dosages (PAC, LAS, etc.) by fuzzy inference and compared them with the feeding rates in the actual operating data.
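
A minimal real-coded genetic algorithm of the kind described, searching without derivatives or other auxiliary information, might look like this sketch. The objective is a toy stand-in, not the plant's dosing model, and the operator choices (tournament selection, blend crossover, Gaussian mutation) are assumptions:

```python
import random

random.seed(0)

def ga_minimize(fitness, bounds, pop=30, gens=60, pm=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    and Gaussian mutation; minimizes `fitness` over the interval `bounds`."""
    lo, hi = bounds
    P = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = min(random.sample(P, 3), key=fitness)   # tournament pick
            b = min(random.sample(P, 3), key=fitness)
            child = 0.5 * (a + b)                       # blend crossover
            if random.random() < pm:                    # occasional mutation
                child += random.gauss(0.0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))         # clip to bounds
        P = nxt
    return min(P, key=fitness)

# toy objective standing in for a dosing-cost criterion (hypothetical)
best = ga_minimize(lambda x: (x - 2.0) ** 2, (0.0, 10.0))
```

In the paper's setup, `fitness` would instead score a candidate dosing rate against the lookup table and fuzzy membership functions built from plant data.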


Analyzing Preprocessing for Correcting Lighting Effects in Hyperspectral Images (초분광영상의 조명효과 보정 전처리기법 분석)

  • Yeong-Sun Song
    • Journal of the Korean Society of Industry Convergence, v.26 no.5, pp.785-792, 2023
  • Because hyperspectral imaging provides detailed spectral information across a broad range of wavelengths, it can be utilized in numerous applications, including environmental monitoring, food quality inspection, medical diagnosis, material identification, art authentication, and crime scene analysis. However, hyperspectral images often contain various distortions caused by the environmental conditions during image acquisition, so these distortions must be properly removed through data preprocessing. In this study, preprocessing methods were investigated for effectively correcting the distortion caused by the artificial light sources used in indoor hyperspectral imaging. For this purpose, a halogen-tungsten artificial light source was installed indoors and hyperspectral images were acquired. The acquired images were then corrected using preprocessing techniques that require no complex auxiliary equipment, and the corrected results were analyzed. According to the analysis, a statistical transformation using the mean and standard deviation of a reference signal was the most effective at correcting distortions caused by artificial light sources.
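
The statistical transformation the study found most effective, standardizing each spectrum and rescaling it to the mean and standard deviation of a reference signal, can be sketched as follows (the data here are synthetic, not the paper's measurements):

```python
import numpy as np

def match_reference(spectrum, reference):
    """Standardize a measured spectrum, then rescale it to the mean and
    standard deviation of a reference signal (e.g. a calibration panel)."""
    z = (spectrum - spectrum.mean()) / spectrum.std()
    return z * reference.std() + reference.mean()

# synthetic example: lighting applies an (unknown) affine distortion
rng = np.random.default_rng(1)
reference = rng.normal(0.6, 0.05, 200)      # reference reflectance signal
measured = 1.8 * reference + 0.3            # distorted acquisition
corrected = match_reference(measured, reference)
```

Because the transformation only uses first and second moments, it exactly undoes any affine (gain plus offset) lighting distortion, which is the simplest model of a uniform artificial light source.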

A Conveyor Algorithm for Complete Consistency of Materialized View in a Self-Maintenance (실체 뷰의 자기관리에서 완전일관성을 위한 컨베이어 알고리듬)

  • Hong, In-Hoon; Kim, Yon-Soo
    • IE interfaces, v.16 no.2, pp.229-239, 2003
  • On-Line Analytical Processing (OLAP) tools access data from the data warehouse for complex data analysis, such as multidimensional analysis and decision-support activities. Current research has led to new developments in all aspects of data warehousing; however, a number of problems still need to be solved to make data warehousing effective. One of them, view maintenance, is keeping views up to date in response to updates of the source data. Keeping a view consistent with updates to the base relations can be expensive, since it may involve querying the external sources where the base relations reside. To reduce maintenance costs, views can be maintained using information that is strictly local to the data warehouse, a process usually referred to as "self-maintenance of views". A number of algorithms have been proposed for self-maintenance of views that keep additional information in the data warehouse in the form of auxiliary views, but these algorithms did not consider the consistency of materialized views under self-maintenance. The purpose of this paper is to study the consistency problem that arises when view self-maintenance is implemented. The proposed "conveyor algorithm" achieves complete consistency of materialized views under self-maintenance while accounting for network delay. The rationale for the conveyor algorithm and its performance characteristics are described in detail.

Bayesian smoothing under structural measurement error model with multiple covariates

  • Hwang, Jinseub; Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society, v.28 no.3, pp.709-720, 2017
  • In healthcare and medical research, many important variables, such as body mass index and laboratory data, are subject to measurement error. It is also not easy to collect large samples because of the high cost and long time required to recruit patients satisfying the inclusion and exclusion criteria. Besides, the demand for solving complex scientific problems has increased so much that a semiparametric regression approach can be of substantial value. To address measurement error, small domains, and scientific complexity, we conduct multivariable Bayesian smoothing under a structural measurement error covariate in this article. Specifically, we enhance our previous model by incorporating other useful auxiliary covariates that are free of measurement error. For the regression spline, we use radial basis functions with fixed knots for the measurement error covariate. We develop a fully Bayesian approach to fit the model and estimate parameters using Markov chain Monte Carlo. Simulation results show that the method performs well, and we illustrate it with an application to national survey data.
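
A radial basis expansion with fixed knots, as used here for the measurement-error covariate, can be sketched as a design matrix; the cubic radial basis below is one common choice and an assumption, since the abstract does not state the exact basis form:

```python
import numpy as np

def radial_basis_design(x, knots):
    """Design matrix with intercept, linear term, and cubic radial basis
    functions |x - k|^3 centered at the fixed knots."""
    B = np.abs(x[:, None] - knots[None, :]) ** 3
    return np.column_stack([np.ones_like(x), x, B])

x = np.linspace(0.0, 1.0, 50)
knots = np.linspace(0.1, 0.9, 5)            # fixed, evenly spaced knots
Z = radial_basis_design(x, knots)           # shape (50, 7)

# quick sanity check: least-squares fit of a smooth curve
y = np.sin(2.0 * np.pi * x)
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

In the Bayesian model, the coefficients of the knot columns would receive a shrinkage prior and be sampled by MCMC rather than fit by least squares; the design matrix itself is the same.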

The future of bioinformatics

  • Gribskov, Michael
    • Proceedings of the Korean Society for Bioinformatics Conference, 2003.10a, pp.1-1, 2003
  • It is clear that computers will play a key role in the biology of the future. Even now, it is virtually impossible to keep track of the key proteins, their names and associated gene names, physical constants (e.g., binding constants, reaction constants), and known physical and genetic interactions without computational assistance. In this sense, computers act as an auxiliary brain, allowing one to keep track of thousands of complex molecules and their interactions. With the advent of gene expression array technology, many experiments are simply impossible without this computer assistance. In the future, as we seek to integrate the reductionist description of life provided by genomic sequencing into complex and sophisticated models of living systems, computers will play an increasingly important role in both analyzing data and generating experimentally testable hypotheses. The future of bioinformatics is thus being driven by potent technological and scientific forces. On the technological side, new experimental technologies such as microarrays, protein arrays, high-throughput expression, and three-dimensional structure determination provide rapidly increasing amounts of detailed experimental information on a genomic scale. On the computational side, faster computers, ubiquitous computing systems, and high-speed networks provide a powerful but rapidly changing environment of potentially immense power. The challenges we face are enormous: How do we create stable data resources when both the science and computational technology change rapidly? How do we integrate and synthesize information from many disparate subdisciplines, each with its own vocabulary and viewpoint? How do we 'liberate' the scientific literature so that it can be incorporated into electronic resources? How do we take advantage of advances in computing and networking to build the international infrastructure needed to support a complete understanding of biological systems?
The seeds of solutions to these problems already exist, at least partially. They emphasize ubiquitous high-speed computation; database interoperation, federation, and integration; and the development of research networks that capture scientific knowledge rather than just the ABCs of genomic sequence. I will discuss a number of these solutions, with examples from existing resources, as well as areas where solutions do not currently exist, with a view to defining what bioinformatics and biology will look like in the future.


Search for [NiFe]-Hydrogenase using Degenerate Polymerase Chain Reaction (Degenerate Polymerase Chain Reaction을 통한 [NiFe]-Hydrogenase의 탐색)

  • Jung, Hee-Jung; Kim, Jaoon Y.H.; Cha, Hyung-Joon
    • Proceedings of the Korean Society for New and Renewable Energy Conference, 2005.11a, pp.631-633, 2005
  • Hydrogenase is a key enzyme for biohydrogen production. In the present work, we searched for [NiFe]-hydrogenases in hydrogen-producing microorganisms using a degenerate polymerase chain reaction (PCR) strategy. Degenerate primers were designed from the conserved regions of [NiFe]-hydrogenase group I, in particular from the structural genes encoding the catalytic subunit in hydrogen-producing bacteria. Most group I [NiFe]-hydrogenases are expressed via a complex mechanism with the aid of auxiliary proteins and are localized through the twin-arginine translocation pathway. The [NiFe]-hydrogenase comprises a large and a small subunit that together form the catalytic unit. Only the small subunit carries a signal peptide for periplasmic localization, and the large and small subunits assemble before localization; during this process, the large subunit is processed by an endopeptidase for maturation. Based on this information, we used the signal peptide sequence and the endopeptidase-recognized C-terminus of the large subunit as templates for the degenerate primers. PCR products of about 2,900 bp were successfully amplified with the designed degenerate primers from the genomic DNA of several microorganisms. The amplified products were inserted into a T-vector and sequenced for confirmation.
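
The degeneracy of such primers follows directly from the IUPAC nucleotide ambiguity codes; a small sketch (the primer sequences below are illustrative, not the paper's primers):

```python
from itertools import product

# IUPAC ambiguity codes for degenerate primer positions
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(primer):
    """All concrete sequences encoded by a degenerate primer."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer))]

def degeneracy(primer):
    """Number of distinct sequences in the primer pool."""
    n = 1
    for b in primer:
        n *= len(IUPAC[b])
    return n
```

For example, `expand("ATGR")` yields the two sequences ATGA and ATGG; keeping the product of per-position degeneracies low is what makes a degenerate primer pool specific enough for PCR.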
