• Title/Abstract/Keywords: complex auxiliary information

18 search results (processing time 0.277 s)

통합 칼리브레이션 가중치 산출 비교연구 (Integrated calibration weighting using complex auxiliary information)

  • 박인호;김수진
    • 응용통계연구
    • /
    • Vol. 34, No. 3
    • /
    • pp.427-438
    • /
    • 2021
  • Two-stage sampling allows population characteristics to be estimated at both the element and cluster levels. When auxiliary information is available at each level, calibration weights that integrate the level-specific information and weight structure can properly reflect not only level-specific characteristics but also multivariate relationships across levels. This study reviews the integrated calibration weighting methods considered by Estevao and Särndal (2006) and Kim (2019). A small simulation study compares the efficiency of these existing methods. Two of them were found to reflect cross-level multivariate characteristics well without greatly inflating the variation of the initial weights: one that elementizes the complex auxiliary information and computes weights in a single calibration step while defining each cluster weight as the mean of the element weights within the cluster, and one that iterates level-specific calibration adjustments using the level-specific auxiliary information while likewise constraining cluster weights to the within-cluster mean of element weights. Both performed very well in terms of goodness of fit of the total estimates for the cross-level auxiliary information, and yielded small relative bias and relative root mean squared error for total estimates of survey characteristics not included in the calibration adjustment.
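The core calibration step above can be sketched in a few lines. This is a minimal single-level, chi-square-distance (GREG-type) calibration with made-up data, not the authors' integrated two-level procedure: design weights are adjusted so that weighted totals of the auxiliary variables exactly match known population totals.

```python
import numpy as np

# Illustrative sample: n elements, auxiliary variables (intercept + one covariate),
# equal design weights, and assumed known population totals t_x.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(1, 3, n)])  # auxiliary variables
d = np.full(n, 10.0)                                     # initial design weights
t_x = np.array([520.0, 1050.0])                          # known population totals

# Chi-square-distance calibration: w_i = d_i * (1 + x_i' lam), where lam is
# chosen so the calibrated totals X'w reproduce t_x exactly.
A = X.T @ (d[:, None] * X)
lam = np.linalg.solve(A, t_x - X.T @ d)
w = d * (1.0 + X @ lam)

calibrated_totals = X.T @ w   # matches t_x by construction
```

The integrated methods in the paper add the cross-level step: constraining each cluster weight to equal the mean of the element weights within the cluster.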

백색 보조 띠 기반의 정밀 스테레오 정합 기법 (Precise Stereo Matching Technique Based on White Auxiliary Stripe)

  • 강한솔;고윤호
    • 한국멀티미디어학회논문지
    • /
    • Vol. 22, No. 12
    • /
    • pp.1356-1367
    • /
    • 2019
  • This paper proposes a novel active stereo matching technique using a white auxiliary stripe pattern. Conventional active stereo matching techniques, which use two cameras and an active source such as a projector, can estimate disparity accurately even in areas with little texture, compared to passive ones. However, conventional active techniques based on color code patterns have difficulty acquiring those patterns robustly when the object contains various colors or is exposed to complex lighting conditions. To overcome this problem, the proposed method uses an additional white auxiliary stripe pattern to acquire and localize the color code patterns robustly. A procedure based on adaptive thresholding and thinning is proposed to extract the auxiliary pattern accurately. Experimental results show that the proposed method measures a stepped sample of known depth more precisely than the conventional method.
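The stripe-extraction step (adaptive thresholding, then thinning) can be sketched as below. This is a hedged toy on a synthetic image, not the paper's pipeline: the local-mean threshold, block size, offset, and the one-pixel-per-column thinning rule are all illustrative choices.

```python
import numpy as np

def adaptive_threshold(img, block=3, c=0.0):
    """Mark pixels brighter than the mean of their (block x block) neighborhood plus c."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            local = padded[y:y + block, x:x + block]
            out[y, x] = img[y, x] > local.mean() + c
    return out

def thin_stripe(img, mask):
    """Thin a roughly horizontal stripe to one row per column (peak intensity)."""
    centers = {}
    for x in range(img.shape[1]):
        ys = np.nonzero(mask[:, x])[0]
        if ys.size:
            centers[x] = int(ys[np.argmax(img[ys, x])])
    return centers

# Dark synthetic image with a bright 2-pixel-thick horizontal stripe at rows 3-4.
img = np.full((8, 8), 10.0)
img[3, :] = 200.0
img[4, :] = 180.0

mask = adaptive_threshold(img, block=3, c=5.0)
centers = thin_stripe(img, mask)   # column -> single stripe row
```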

구문 제약으로 문형을 사용하는 CFG기반의 한국어 파싱 (CFG based Korean Parsing Using Sentence Patterns as Syntactic Constraint)

  • 박인철
    • 한국산학기술학회논문지
    • /
    • Vol. 9, No. 4
    • /
    • pp.958-963
    • /
    • 2008
  • In Korean, the predicate governs the sentence through semantic constraints, and most Korean sentences are complex sentences consisting of a main clause and embedded clauses. It is therefore very difficult to describe a syntactic grammar or syntactic constraints suited to Korean, and parsing Korean produces various syntactic ambiguities. This paper proposes a method for resolving syntactic ambiguity by describing a CFG-based grammar that uses sentence patterns as syntactic constraints. To this end, complex sentences containing embedded clauses are also classified into sentence patterns, and 44 patterns are used. However, because sentence-pattern information alone cannot resolve all syntactic ambiguities, owing to the characteristics of Korean, parsing is performed using semantic markers that impose semantic constraints on the sentence patterns. The semantic markers can be used to resolve syntactic ambiguities arising from the handling of auxiliary particles and from comitative case markers.
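The constraint idea can be illustrated in miniature: candidate analyses of an ambiguous clause are kept only if the case frame they induce matches a registered sentence pattern. The pattern inventory and role labels below are invented stand-ins for the paper's 44 patterns and semantic markers, not a reproduction of them.

```python
# Each pattern pairs a predicate class with an allowed sequence of case roles.
SENTENCE_PATTERNS = {
    ("motion", ("NOM", "LOC")),           # e.g. "X goes to Y"
    ("transfer", ("NOM", "DAT", "ACC")),  # e.g. "X gives Z to Y"
}

def admissible(pred_class, roles):
    """Keep an analysis only if (predicate class, role sequence) is a known pattern."""
    return (pred_class, tuple(roles)) in SENTENCE_PATTERNS

# Two candidate analyses of one ambiguous clause; the constraint prunes one.
candidates = [
    ("transfer", ["NOM", "DAT", "ACC"]),
    ("transfer", ["NOM", "LOC"]),   # wrong case frame for a transfer verb
]
parses = [c for c in candidates if admissible(*c)]
```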

분산처리 최적조류계산 기반 연계계통 급전계획 알고리즘 개발 (A New Dispatch Scheduling Algorithm Applicable to Interconnected Regional Systems with Distributed Inter-temporal Optimal Power Flow)

  • 정구형;강동주;김발호
    • 전기학회논문지
    • /
    • Vol. 56, No. 10
    • /
    • pp.1721-1730
    • /
    • 2007
  • This paper proposes a new dispatch scheduling algorithm for interconnected regional system operations. The dispatch scheduling problem, formulated as a mixed-integer non-linear programming (MINLP) problem, can be computed efficiently by a generalized Benders decomposition (GBD) algorithm. GBD guarantees adequate computation speed and solution convergence because it decomposes the primal problem into a master problem and subproblems. In addition, the inter-temporal optimal power flow (OPF) subproblem of the dispatch scheduling problem comprises many variables and constraints reflecting time continuity, and the resulting increase in the dimensions of the optimization problem makes the inter-temporal OPF complex. In this paper, a regional decomposition technique based on the auxiliary problem principle (APP) is introduced to obtain an efficient inter-temporal OPF solution through parallel implementation. It can also find the most economic dispatch schedule incorporating power transactions without disclosing private information. The algorithm can therefore be extended into an efficient dispatch scheduling model for interconnected system operation.
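The master/subproblem split behind Benders decomposition can be shown on a deliberately tiny toy problem, not the paper's MINLP dispatch model: minimize 3y + 2x subject to x + y >= 2, x >= 0, with integer master variable y in {0, 1, 2}. The master is solved by enumeration; the subproblem min{2x : x >= 2 - y} yields a dual price of 2 when its constraint binds, which generates the optimality cut.

```python
def subproblem(y):
    """Solve min{2x : x >= 2 - y, x >= 0} for fixed y; return value and dual price."""
    x = max(0.0, 2.0 - y)
    lam = 2.0 if 2.0 - y > 0 else 0.0   # dual multiplier -> Benders cut slope
    return 2.0 * x, lam

cuts = []       # each cut constrains the master: theta >= lam * (2 - y)
best = None
for _ in range(10):
    # Master: pick y minimizing 3y + theta, where theta must satisfy all cuts.
    y_star, master_val = min(
        ((y, 3.0 * y + max([0.0] + [lam * (2.0 - y) for lam in cuts]))
         for y in (0, 1, 2)),
        key=lambda t: t[1],
    )
    theta = master_val - 3.0 * y_star
    sub_val, lam = subproblem(y_star)
    if sub_val <= theta + 1e-9:          # bounds meet: optimal
        best = (y_star, 3.0 * y_star + sub_val)
        break
    cuts.append(lam)                     # add the violated optimality cut
```

Here the iteration converges to y = 0, x = 2, total cost 4. The APP-based regional decomposition in the paper plays an analogous coordinating role across regions, with subproblems solvable in parallel.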

유전알고리즘과 퍼지추론시스템의 합성을 이용한 정수처리공정의 약품주입률 결정 (Determination of dosing rate for water treatment using fusion of genetic algorithms and fuzzy inference system)

  • 김용열;강이석
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996 Korean Automatic Control Conference proceedings (domestic session); POSTECH, Pohang; 24-26 Oct. 1996
    • /
    • pp.952-955
    • /
    • 1996
  • It is difficult to determine the feeding rate of coagulant in a water treatment process because of nonlinearity, multiple variables, and slow response characteristics. To deal with this difficulty, a fusion of genetic algorithms and a fuzzy inference system was used to determine the coagulant feeding rate. Genetic algorithms are highly robust on complex operational problems, since they use randomized operators and search for the best chromosome, without auxiliary information, in a population consisting of codings of the parameter set. To apply the algorithm, we built a lookup table and membership functions from actual operating data of a water treatment process. We determined optimal coagulant dosages (PAC, LAS, etc.) by fuzzy inference and compared them with the feeding rates in the actual operating data.
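The genetic-algorithm machinery described above (coded parameter sets evolved by selection, crossover, and mutation using fitness values only) can be sketched as follows. The 8-bit dose coding and the toy objective are invented for illustration; they are not the paper's dosing model.

```python
import random

random.seed(1)

def decode(bits):
    """8-bit chromosome -> dose in [0, 25.5] (one decimal place)."""
    return int("".join(map(str, bits)), 2) / 10.0

def fitness(dose):
    """Toy objective: the best dose is assumed to be 12.0."""
    return -(dose - 12.0) ** 2

def evolve(pop, gens=60):
    for _ in range(gens):
        pop.sort(key=lambda b: fitness(decode(b)), reverse=True)
        parents = pop[: len(pop) // 2]        # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 8)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(8)           # one-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=lambda b: fitness(decode(b)))

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
best_dose = decode(evolve(pop))
```

Note that only `fitness` values drive the search; no gradient or other auxiliary information is needed, which is the property the abstract highlights.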

초분광영상의 조명효과 보정 전처리기법 분석 (Analyzing Preprocessing for Correcting Lighting Effects in Hyperspectral Images)

  • 송영선
    • 한국산업융합학회 논문집
    • /
    • Vol. 26, No. 5
    • /
    • pp.785-792
    • /
    • 2023
  • Because hyperspectral imaging provides detailed spectral information across a broad range of wavelengths, it can be utilized in numerous applications, including environmental monitoring, food quality inspection, medical diagnosis, material identification, art authentication, and crime scene analysis. However, hyperspectral images often contain various distortions caused by the environmental conditions during image acquisition, which must be removed through a data preprocessing process. In this study, preprocessing methods were investigated for correcting the distortion caused by artificial light sources in indoor hyperspectral imaging. For this purpose, a halogen-tungsten artificial light source was installed indoors and hyperspectral images were acquired. The acquired images were then corrected for distortion using preprocessing methods that do not require complex auxiliary equipment, and the results were analyzed. According to the analysis, a statistical transformation using the mean and standard deviation with respect to a reference signal was the most effective at correcting distortions caused by artificial light sources.
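The mean/standard-deviation transformation found most effective above amounts to matching each spectrum's first two moments to those of a reference signal. A minimal sketch with synthetic data (the reference here is an assumed stand-in for something like a white reference panel):

```python
import numpy as np

rng = np.random.default_rng(42)
reference = rng.normal(0.6, 0.05, 200)      # reference reflectance signal

# A distorted spectrum: same shape, but shifted and scaled by uneven lighting.
true_spectrum = rng.normal(0.6, 0.05, 200)
distorted = 1.8 * true_spectrum + 0.3

def stat_correct(x, ref):
    """Match the mean and standard deviation of x to those of ref."""
    return (x - x.mean()) / x.std() * ref.std() + ref.mean()

corrected = stat_correct(distorted, reference)
```

By construction the corrected spectrum has the reference's mean and standard deviation, which removes the additive/multiplicative lighting offsets.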

실체 뷰의 자기관리에서 완전일관성을 위한 컨베이어 알고리듬 (A Conveyor Algorithm for Complete Consistency of Materialized View in a Self-Maintenance)

  • 홍인훈;김연수
    • 산업공학
    • /
    • Vol. 16, No. 2
    • /
    • pp.229-239
    • /
    • 2003
  • On-Line Analytical Processing (OLAP) tools access data from the data warehouse for complex data analysis, such as multidimensional analysis and decision-support activities. Current research has led to new developments in all aspects of data warehousing; however, a number of problems still need to be solved to make data warehousing effective. One of them, view maintenance, is to maintain materialized views in response to updates in the source data. Keeping a view consistent with updates to its base relations can be expensive, since it may involve querying external sources where the base relations reside. To reduce maintenance costs, the views can be maintained using information that is strictly local to the data warehouse, a process usually referred to as "self-maintenance of views". A number of algorithms have been proposed for self-maintenance of views that keep additional information in the data warehouse in the form of auxiliary views, but those algorithms did not consider the consistency of materialized views under self-maintenance. The purpose of this paper is to study the consistency problem that arises when self-maintenance of views is implemented. The proposed "conveyor algorithm" achieves complete consistency of a materialized view under self-maintenance while taking network delay into account. The rationale for the conveyor algorithm and its performance characteristics are described in detail.
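The self-maintenance setting can be illustrated with a toy join view (this sketches the general auxiliary-view idea the abstract builds on, not the conveyor algorithm itself): by keeping a local auxiliary copy of the remote source's join columns, an insert at one source can be folded into the materialized view without querying the other source.

```python
R = {(1, "a"), (2, "b")}          # (key, payload) rows at source 1
S = {(1, "x"), (3, "y")}          # (key, payload) rows at source 2

# Materialized join view R |><| S, plus a local auxiliary view over S.
view = {(k, a, b) for (k, a) in R for (k2, b) in S if k == k2}
aux_S = {k: b for (k, b) in S}    # auxiliary view kept in the warehouse

def insert_into_R(row):
    """Maintain `view` using only local state; no query to remote source 2."""
    k, a = row
    R.add(row)
    if k in aux_S:
        view.add((k, a, aux_S[k]))

insert_into_R((3, "c"))           # view gains (3, "c", "y") locally
```

The consistency problem the paper addresses arises when such local updates race with delayed update notifications from the sources.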

Bayesian smoothing under structural measurement error model with multiple covariates

  • Hwang, Jinseub;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 28, No. 3
    • /
    • pp.709-720
    • /
    • 2017
  • In healthcare and medical research, many important variables, such as body mass index and laboratory data, are subject to measurement error. It is also not easy to collect large samples because of the high cost and long time required to recruit patients who satisfy the inclusion and exclusion criteria. Besides, the demand for solving complex scientific problems has increased greatly, so a semiparametric regression approach can be of substantial value. To address measurement error, small domains, and scientific complexity, we conduct multivariable Bayesian smoothing under a structural measurement error covariate in this article. Specifically, we enhance our previous model by incorporating other useful auxiliary covariates that are free of measurement error. For the regression spline, we use radial basis functions with fixed knots for the measurement error covariate. We develop a fully Bayesian approach to fit the model and estimate the parameters using Markov chain Monte Carlo. Simulation results show that the method performs well. We illustrate the approach with an application to national survey data.
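The regression-spline building block mentioned above, radial basis functions with fixed knots, reduces to constructing a design matrix. A minimal sketch with illustrative knots and a cubic radial basis (the full structural-measurement-error model and MCMC fitting are beyond this fragment):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 9)              # covariate values (illustrative)
knots = np.array([0.25, 0.5, 0.75])       # fixed knots

# Cubic radial basis |x - knot|^3 for each knot, alongside intercept and
# linear terms; the smoothing model puts shrinkage priors on the Z-columns.
Z = np.abs(x[:, None] - knots[None, :]) ** 3
design = np.column_stack([np.ones_like(x), x, Z])
```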

The future of bioinformatics

  • Gribskov, Michael
    • 한국생물정보학회:학술대회논문집
    • /
    • Proceedings of the 2nd Annual Conference of the Korean Society for Bioinformatics and Systems Biology (2003)
    • /
    • pp.1-1
    • /
    • 2003
  • It is clear that computers will play a key role in the biology of the future. Even now, it is virtually impossible to keep track of the key proteins, their names and associated gene names, physical constants (e.g. binding constants, reaction constants, etc.), and known physical and genetic interactions without computational assistance. In this sense, computers act as an auxiliary brain, allowing one to keep track of thousands of complex molecules and their interactions. With the advent of gene expression array technology, many experiments are simply impossible without this computer assistance. In the future, as we seek to integrate the reductionist description of life provided by genomic sequencing into complex and sophisticated models of living systems, computers will play an increasingly important role both in analyzing data and in generating experimentally testable hypotheses. The future of bioinformatics is thus being driven by potent technological and scientific forces. On the technological side, new experimental technologies such as microarrays, protein arrays, and high-throughput expression and three-dimensional structure determination provide rapidly increasing amounts of detailed experimental information on a genomic scale. On the computational side, faster computers, ubiquitous computing systems, and high-speed networks provide a powerful but rapidly changing environment of potentially immense power. The challenges we face are enormous: How do we create stable data resources when both the science and the computational technology change rapidly? How do we integrate and synthesize information from many disparate subdisciplines, each with its own vocabulary and viewpoint? How do we 'liberate' the scientific literature so that it can be incorporated into electronic resources? How do we take advantage of advances in computing and networking to build the international infrastructure needed to support a complete understanding of biological systems?
The seeds of solutions to these problems already exist, at least partially. These solutions emphasize ubiquitous high-speed computation; database interoperation, federation, and integration; and the development of research networks that capture scientific knowledge rather than just the ABCs of genomic sequence. I will discuss a number of these solutions, with examples from existing resources, as well as areas where solutions do not currently exist, with a view to defining what bioinformatics and biology will look like in the future.

Degenerate Polymerase Chain Reaction을 통한 [NiFe]-Hydrogenase의 탐색 (Search for [NiFe]-Hydrogenase using Degenerate Polymerase Chain Reaction)

  • 정희정;김영환;차형준
    • 한국신재생에너지학회:학술대회논문집
    • /
    • 17th Workshop and 2005 Fall Conference of the Korean Society for New and Renewable Energy
    • /
    • pp.631-633
    • /
    • 2005
  • For biohydrogen production, hydrogenase is a key enzyme. In the present work we searched for [NiFe]-hydrogenases from hydrogen-producing microorganisms using a degenerate polymerase chain reaction (PCR) strategy. Degenerate primers were designed from the conserved regions of [NiFe]-hydrogenase group I, in particular the structural genes encoding the catalytic subunit in hydrogen-producing bacteria. Most group I [NiFe]-hydrogenases are expressed via a complex mechanism with the aid of auxiliary proteins and are localized through the twin-arginine translocation pathway. [NiFe]-hydrogenase is composed of a large and a small subunit that together form the catalytic unit. Only the small subunit carries a signal peptide for periplasmic localization, and the large and small subunits come together before localization. During this process, the large subunit is processed by an endopeptidase for maturation. Based on this information, we used the signal peptide sequence and the C-terminal region of the large subunit recognized by the endopeptidase as templates for the degenerate primers. PCR products of about 2,900 bp were successfully amplified with the designed degenerate primers from the genomic DNAs of several microorganisms. The amplified PCR products were inserted into a T-vector and sequenced for confirmation.
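A degenerate primer encodes a conserved motif by using IUPAC ambiguity codes, so one primer sequence stands for every concrete sequence it can anneal to. A small sketch (the example primer is invented, not one of the paper's primers; only a subset of the IUPAC codes is shown):

```python
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "N": "ACGT"}   # subset of the ambiguity codes

def expand(degenerate_primer):
    """List all concrete sequences matched by a degenerate primer."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in degenerate_primer))]

variants = expand("GARAAY")   # R = A/G and Y = C/T give four concrete primers
```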
