• Title/Summary/Keyword: 시간복잡도 (Time Complexity)

Search Results: 3,677

A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.3
    • /
    • pp.286-298
    • /
    • 2002
  • The Remote Procedure Call (RPC) has traditionally been used for Inter-Process Communication (IPC) among processes in distributed computing environments. As distributed applications have grown more and more complex, the Mobile Agent paradigm for IPC has emerged. Because several IPC paradigms now exist, studies that evaluate and compare the performance of each paradigm have appeared recently. But the performance models used in previous research did not reflect real distributed computing environments correctly, because they did not consider the elements required for providing security services. Since real distributed environments are open, they are very vulnerable to a variety of attacks. In order to execute applications securely in a distributed computing environment, security services that protect applications and information against these attacks must be considered. In this paper, we evaluate and compare the performance of the Remote Procedure Call with that of the Mobile Agent among IPC paradigms. We examine the security services needed to execute applications securely, and propose new performance models that take those services into account. We design performance models, which describe an information retrieval system spanning N database services, using Petri nets. We compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution time of each. The comparison of the two performance models with security services for secure communication shows that the execution time of the Remote Procedure Call model increases sharply because of the many communications between hosts under a strong cryptography mechanism, whereas the execution time of the Mobile Agent model increases gradually because the Mobile Agent paradigm reduces the quantity of communication between hosts.
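The qualitative result above (RPC time rising sharply with per-message cryptography cost, Mobile Agent time rising gradually) can be sketched with a toy cost model. The functions and all parameter values below are illustrative assumptions, not the paper's Petri-net models or measured parameters:

```python
def rpc_time(n_db, t_msg, t_crypto, t_query):
    # RPC: one encrypted request/response pair crosses the network
    # for each of the n_db database services
    return n_db * (2 * (t_msg + t_crypto) + t_query)

def agent_time(n_db, t_migrate, t_crypto, t_query):
    # Mobile agent: one encrypted migration per hop (n_db hosts plus
    # the return trip); queries then run locally at each host
    return (n_db + 1) * (t_migrate + t_crypto) + n_db * t_query

# Illustrative parameters: strong cryptography dominates per-message cost
n, t_msg, t_mig, t_crypto, t_query = 10, 1.0, 2.0, 5.0, 0.5
print(rpc_time(n, t_msg, t_crypto, t_query))    # 125.0
print(agent_time(n, t_mig, t_crypto, t_query))  # 82.0
```

In this sketch the crypto cost is paid 2N times by RPC but only N+1 times by the agent, which reproduces the paper's qualitative finding as the cryptography gets heavier.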

Closed Integral Form Expansion for the Highly Efficient Analysis of Fiber Raman Amplifier (라만증폭기의 효율적인 성능분석을 위한 라만방정식의 적분형 전개와 수치해석 알고리즘)

  • Choi, Lark-Kwon;Park, Jae-Hyoung;Kim, Pil-Han;Park, Jong-Han;Park, Nam-Kyoo
    • Korean Journal of Optics and Photonics
    • /
    • v.16 no.3
    • /
    • pp.182-190
    • /
    • 2005
  • The fiber Raman amplifier (FRA) is a distinctly advantageous technology. Due to its wide, flexible gain bandwidth and intrinsically low noise characteristics, the FRA has become an indispensable technology today. Various FRA modeling methods, with different levels of convergence speed and accuracy, have been proposed in order to gain valuable insights into FRA dynamics and optimum design before real implementation. Still, all these approaches share the common platform of coupled ordinary differential equations (ODEs) for the Raman equation set, which must be solved along the long fiber propagation axis. The ODE platform has classically set the bar for achievable convergence speed, resulting in exhaustive calculation effort. In this work, we propose an alternative, highly efficient framework for FRA analysis. By treating the Raman gain as a perturbation factor in an adiabatic process, we implement the algorithm by deriving a recursive relation for the integrals of power inside the fiber in terms of the effective length, and by constructing a matrix formalism for the solution of the given FRA problem. Finally, by adiabatically turning on the Raman process in the fiber as the iteration order increases, the FRA solution can be obtained along the iteration axis for the whole length of fiber rather than along the fiber propagation axis, enabling faster convergence at an accuracy equivalent to that of methods based on coupled ODEs. Performance comparisons for co-, counter-, and bi-directionally pumped multi-channel FRAs show convergence more than 10² times faster than the average power method at the same level of accuracy (relative deviation < 0.03 dB).
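For reference, the coupled Raman equation set referred to above takes, in its simplest two-wave (single pump, single signal) textbook form, the shape below; the paper's multi-channel set generalizes this, and the notation here is a standard form assumed for illustration, not taken from the paper:

```latex
\frac{dP_s}{dz} = \frac{g_R}{A_{\mathrm{eff}}}\,P_p P_s - \alpha_s P_s,
\qquad
\frac{dP_p}{dz} = -\,\frac{\omega_p}{\omega_s}\,\frac{g_R}{A_{\mathrm{eff}}}\,P_p P_s - \alpha_p P_p
```

Here \(P_s, P_p\) are the signal and pump powers, \(g_R\) the Raman gain coefficient, \(A_{\mathrm{eff}}\) the effective area, \(\alpha_s, \alpha_p\) the fiber attenuation coefficients, and the sign of \(dP_p/dz\) flips for counter-propagating pumps. The nonlinear coupling term \(P_p P_s\) is what the proposed method treats as the adiabatic perturbation.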

The development of anti-DR4 single-chain Fv (ScFv) antibody fused to Escherichia coli alkaline phosphatase (대장균의 alkaline phosphatase가 융합된 anti-DR4 single-chain Fv (ScFv) 항체의 개발)

  • Han, Seung Hee;Kim, Jin-Kyoo
    • Korean Journal of Microbiology
    • /
    • v.52 no.1
    • /
    • pp.10-17
    • /
    • 2016
  • Enzyme immunoassays that analyze the specific binding activity of an antibody to an antigen use horseradish peroxidase (HRP) or alkaline phosphatase (AP). Chemical methods are usually used for coupling these enzymes to the antibody, a complicated and random cross-linking process. As a result, it can decrease or abolish the functional activity of either the antibody or the enzyme. In addition, most enzyme assays use a secondary antibody to detect the antigen-binding activity of the primary antibody. The enzyme coupled to the secondary antibody provides the binding signal through substrate-based color development, meaning a secondary antibody is required in the enzyme immunoassay. Additional incubation time for binding of the secondary antibody is also necessary, and more importantly, non-specific binding caused by the secondary antibody must be eliminated. In this study, we cloned AP from the Escherichia coli (E. coli) chromosome by PCR and fused it to the hAY4 single-chain variable domain fragment (ScFv) specific to death receptor 4 (DR4), a receptor for tumor necrosis factor α-related apoptosis-inducing ligand (TRAIL). hAY4 ScFv-AP expressed in E. coli appeared at 73.8 kDa as a monomer in SDS-PAGE. However, in size-exclusion chromatography (SEC) the fusion protein appeared at 147.6 kDa as a dimer, confirming that natural dimerization of AP by non-covalent association induced ScFv-AP dimerization. In several immunoassays, such as ELISA, Western blot, and immunocytochemistry, it showed antigen-binding activity through color development of substrates catalyzed by the AP directly fused to the primary hAY4 ScFv, without a secondary antibody. In summary, the hAY4 ScFv-AP fusion protein was successfully purified as a soluble dimeric form from E. coli and showed antigen-binding activity in several immunoassays without the addition of a secondary antibody, which is time-consuming, expensive, and sometimes causes non-specific false binding.

On-Line Determination of Steady State in Simulation Output (시뮬레이션 출력의 안정상태 온라인 결정에 관한 연구)

  • 이영해;정창식;경규형
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 1996.05a
    • /
    • pp.1-3
    • /
    • 1996
  • In simulation-based system analysis, automation of experiments is an area of much current research and development. In simulations of computer and communication systems, for example, running simulations over a large number of models requires automated experiment control. If the simulation experiment procedure (number of runs, run length, data collection method, and so on) is not automated, the time and human resources required for the experiment grow considerably, and the analysis of the output data also becomes difficult. To automate the experiment procedure while analyzing simulation output efficiently, the initial bias that always arises when a simulation is run must first be removed. Only when the data used for output analysis are collected in steady state, free of initial bias, is a correct interpretation of the real system possible. In practice, the most important and most difficult problem in simulation output analysis is estimating the steady-state mean of the stochastic process formed by the output data, together with a confidence interval (c.i.) for that mean. The information contained in a confidence interval tells the decision maker how precisely the mean can be estimated. However, because the output data obtained from a single simulation run are generally nonstationary and autocorrelated, classical statistical techniques cannot be applied directly; simulation output-analysis techniques are used to overcome this. This paper presents two new techniques for finding the truncation point at which output data should be discarded to remove initial bias: one using the Euclidean distance (ED), and one using the backpropagation neural network (BNN) algorithm now widely used for pattern classification. Unlike most existing techniques, these require no pilot run and can determine the truncation point online during a single simulation run. Existing work on truncation points is as follows. Conway's rule takes the current observation as the truncation point if it is neither the maximum nor the minimum of the subsequent data; by the structure of the algorithm, online determination is impossible. Whereas Conway's rule is inherently offline, the Modified Conway Rule (MCR) takes the current observation as the truncation point if it is neither the maximum nor the minimum of the preceding data, so it can run online. The Crossings-of-the-Mean Rule (CMR) uses the cumulative mean and determines the truncation point by the number of times the observations cross that mean from above or below; the required number of crossings must be chosen, and in general a chosen value is not applicable across different systems. The Cumulative-Mean Rule (CMR2) plots the grand cumulative mean of output data obtained from several pilot runs and determines the steady-state point by eye. Because it uses cumulative means of averages from multiple simulation runs, online determination of the truncation point is impossible, and the analyst must decide arbitrarily from the graph. Welch's Method (WM) uses a Brownian-bridge statistic, exploiting the fact that as n approaches infinity it converges to the Brownian bridge distribution.
Batches are formed from the simulation output data, and a single batch is used as the sample. This technique has the drawbacks that the algorithm is complex and a parameter value must be estimated. The Law-Kelton Method (LKM) is based on regression theory: after the simulation ends, a regression line is fitted to the cumulative-mean data, and if the null hypothesis that the slope of the regression line is zero is accepted, that point is taken as the truncation point. Because the data are processed, once the simulation has finished, in the reverse of the order in which they were collected, online use is impossible. Welch's Procedure (WP) determines the truncation point visually from moving averages of data collected over five or more simulation runs and relies on iterative deletion, so online determination is impossible; in addition, a window size for the moving average must be chosen. As this review shows, existing methods are weak from the standpoint of determining the truncation point online during a single simulation run. Moreover, current commercial simulation software leaves the truncation point to the analyst's arbitrary judgment, so the truncation point cannot be determined accurately and quantitatively for the system under study. When the user chooses the truncation point arbitrarily, not only is the initial-bias problem hard to resolve effectively, but the risk grows of deleting far more data than necessary, or too little to remove the initial bias. Furthermore, most existing methods require pilot runs to find the truncation point; that is, simulation runs are needed solely to locate the steady-state point, and since those runs are not used for output analysis, the loss of time is large.
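Of the truncation rules surveyed above, the Modified Conway Rule (MCR) is the simplest one that works online in a single run. A minimal sketch (the stopping condition follows the rule's description; data values are invented for illustration):

```python
def mcr_truncation_point(stream):
    """Modified Conway Rule (MCR): the warm-up period ends at the first
    observation that is neither the maximum nor the minimum of all
    observations seen so far -- decidable online, in a single run."""
    lo = hi = None
    for i, x in enumerate(stream):
        if lo is None:
            lo = hi = x
            continue
        if lo < x < hi:
            return i  # index of the first non-extreme observation
        lo, hi = min(lo, x), max(hi, x)
    return None  # the series never left its running extremes

# A monotone warm-up transient followed by fluctuation around a level:
data = [1, 2, 4, 8, 6, 7, 7, 6]
print(mcr_truncation_point(data))  # 4 (6 lies strictly between 1 and 8)
```

Because the rule only compares each new observation against the running extremes, it needs O(1) memory and no pilot run, which is exactly the property the paper's ED- and BNN-based methods also aim for.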


Comparison of the Diagnostic Performance of ¹⁴C-urea Breath Test According to Counting Method for the Diagnosis of Helicobacter pylori Infection (Helicobacter pylori 감염 진단 시 ¹⁴C-요소호기검사의 계수측정 방법에 따른 진단성능 비교)

  • Kim, Min-Woo;Lim, Seok-Tae;Lee, Seung-Ok;Sohn, Myung-Hee
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.1
    • /
    • pp.21-25
    • /
    • 2005
  • Purpose: The ¹⁴C-urea breath test (UBT) is a non-invasive and reliable method for the diagnosis of Helicobacter pylori (HP) infection. In this study, we evaluated the diagnostic performance of a new and rapid ¹⁴C-UBT (Heliprobe method), which is equipped with a Geiger-Müller counter, and compared the results with those obtained using the conventional method. Materials and Methods: Forty-nine patients with dyspepsia underwent gastroduodenoscopy and ¹⁴C-UBT. A 37 kBq ¹⁴C-urea capsule was administered to each patient and breath samples were collected. In the Heliprobe method, patients exhaled into a Heliprobe BreathCard for 10 min, and the activity of the BreathCard was then counted using the Heliprobe analyzer. In the conventional method, results were counted using a liquid scintillation counter. During gastroduodenoscopy, 18 of the 49 patients underwent biopsy. Against these histologic results, we evaluated and compared the diagnostic performance of the two methods. We also evaluated the concordance and discordance rates between them. Results: Across all 49 patients, the concordance rate between the conventional and Heliprobe methods was 98% (48/49) and the discordance rate was 2% (1/49). Thirteen of the 18 biopsied patients were found to be HP-positive on histology, and both the Heliprobe and the conventional method classified 13 of 13 HP-positive patients and 5 of 5 HP-negative patients correctly (sensitivity 100%, specificity 100%, accuracy 100%). Conclusion: The Heliprobe method demonstrated the same diagnostic performance as the conventional method and was a simpler and more rapid technique.
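The diagnostic figures quoted above follow directly from the confusion-matrix counts reported for the 18 biopsied patients. A small sketch of the standard definitions (the function is generic, not specific to this study):

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts: tp/fp = true/false positives, tn/fn = true/false negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Heliprobe vs. histology: 13/13 HP-positive and 5/5 HP-negative correct
print(diagnostic_performance(13, 0, 5, 0))  # (1.0, 1.0, 1.0)
```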

A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu;Lee Sang-goo;Chun Jonghoon;Choi Dong-Hoon
    • Journal of KIISE:Databases
    • /
    • v.33 no.1
    • /
    • pp.102-116
    • /
    • 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by the participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. The classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed and plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe that the lack of a logical model for classification introduces a number of problems, not only for classification itself but also for the product database in general. A classification scheme needs to meet diverse user views to support efficient and convenient use of product information. It needs to change and evolve frequently, without breaking consistency, as new products are introduced, existing products disappear, and classes are reorganized or specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To meet these requirements, a classification scheme should be dynamic enough to accommodate such changes in reasonable time and at reasonable cost. However, the classification schemes widely used today, such as UNSPSC and eClass, have many limitations with respect to these dynamic requirements. In this paper, we try to understand what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information carries a wealth of semantics, such as class attributes like material, time, and place, as well as integrity constraints.
We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe a semantic classification model that satisfies the requirements arising from the dynamic features of product databases. It provides a means to express more semantics for product classes explicitly and formally, and it organizes class relationships into a graph. We believe the model proposed in this paper satisfies the requirements and challenges raised by previous works.
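The idea of organizing class relationships into a graph, rather than a fixed code hierarchy, can be sketched as follows. All class names, attributes, and the `is-a` relation here are hypothetical illustrations of the general approach, not the paper's actual model:

```python
# Hypothetical sketch: product classes as nodes carrying explicit
# attribute semantics, with typed relationships forming a graph.
classes = {
    "Fastener": {"attrs": {"material": "string"}},
    "Bolt":     {"attrs": {"material": "string", "thread_mm": "float"}},
    "HexBolt":  {"attrs": {"material": "string", "thread_mm": "float"}},
}
edges = [("Bolt", "is-a", "Fastener"), ("HexBolt", "is-a", "Bolt")]

def ancestors(cls):
    """Follow is-a edges upward; a mapping between two schemes can then
    match a class in one scheme to any ancestor in the other."""
    out, frontier = [], [cls]
    while frontier:
        cur = frontier.pop()
        for src, rel, dst in edges:
            if src == cur and rel == "is-a":
                out.append(dst)
                frontier.append(dst)
    return out

print(ancestors("HexBolt"))  # ['Bolt', 'Fastener']
```

Because relationships are explicit edges rather than digits in a code, classes can be reorganized or specialized by editing edges, without renumbering the whole scheme.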

A Study on the Effect of Improving Permeability by Injecting a Soil Remediation Agent in the In-situ Remediation Method Using Plasma Blasting, Pneumatic Fracturing, and Vacuum Suction Method (플라즈마 블라스팅, 공압파쇄, 진공추출이 활용된 지중 토양정화공법의 정화제 주입에 따른 투수성 개선 연구)

  • Geun-Chun Lee;Jae-Yong Song;Cha-Won Kang;Hyun-Shic Jang;Bo-An Jang;Yu-Chul Park
    • The Journal of Engineering Geology
    • /
    • v.33 no.3
    • /
    • pp.371-388
    • /
    • 2023
  • A stratum with a complex composition and a distributed low-permeability soil layer is difficult to remediate quickly because soil remediation does not proceed easily. For efficient purification, the permeability must be improved and the soil remediation agent (H2O2) must be injected into the contaminated section so that it makes sufficient contact with the TPH (total petroleum hydrocarbons). This study analyzed a method for crack formation and effective delivery of the soil remediation agent based on pneumatic fracturing, plasma blasting, and vacuum suction (the PPV method) and compared its improvement effect with that of chemical oxidation. A demonstration test confirmed the effective delivery of the soil remediation agent to a site contaminated with TPH. The injection amount and injection time were monitored to calculate the delivery characteristics and the range of influence, and electrical resistivity surveying qualitatively confirmed changes in the underground environment. Permeability tests also evaluated and compared the permeability changes for each method. The amount of soil remediation agent injected was about 4.74 to 7.48 times greater in the experimental group (PPV method) than in the control group (chemical oxidation), and the PPV method allowed injection rates per unit time (L/min) about 5.00 to 7.54 times higher than the control method. Electrical resistivity measurements indicated that with the PPV method, the diffusion of H2O2 and other fluids toward the surface soil layer lowered the resistivity: the horizontal change ratio between the injection well and the extraction well corresponded to a resistivity decrease by a factor of about 1.12 to 2.38. Quantitative evaluation of hydraulic conductivity at the end of the test found that the control group retained 21.1% of the original hydraulic conductivity, while the experimental group retained 81.3% of the initial value, close to the initial permeability coefficient. Radii of influence calculated from the survey results showed that the PPV method improved on the control group by 220% on average.

Relationship Between Seasonal Dynamics of Zooplankton Community and Diversity in Small Reservoir Focusing on Occurrence Pattern (출현 양상 기반 소형호 내 동물플랑크톤 군집의 계절 변동과 다양성 관계)

  • Geun-Hyeok Hong;Hye-ji Oh;Yerim Choi;Jun-Wan Kim;Beom-Myeong Choi;KwangHyeon Chang;Min-Ho Jang
    • Korean Journal of Ecology and Environment
    • /
    • v.56 no.2
    • /
    • pp.172-186
    • /
    • 2023
  • Small ponds, which exhibit unstable succession patterns in their plankton communities, are less well studied than large lakes. Recently, the importance of small ponds for local biodiversity conservation has highlighted the necessity of understanding the dynamics of their biological communities. In the present study, we collected zooplankton from three small reservoirs on a monthly basis and analyzed their seasonal dynamics. To understand the complicated zooplankton community dynamics of small reservoirs, we categorized zooplankton species into four groups based on their occurrence pattern (abundance and frequency): LALF (Low Abundance, Low Frequency), LAHF (Low Abundance, High Frequency), HALF (High Abundance, Low Frequency), and HAHF (High Abundance, High Frequency). We compared the seasonal pattern of each group and estimated community diversity based on each group's contribution to temporal beta diversity. The results revealed a relationship between groups with the same abundance but different occurrence frequencies, and showed that copepod nauplii are a commonly important component in terms of both abundance and frequency. On the other hand, the species included in the LALF Group throughout the study period, such as Anuraeopsis fissa, Hexarthra mira, and Lecane luna, are key in terms of monthly succession and diversity. In contrast, the HALF Group, which contains species that occur only at certain times of the year and then dominate the waterbody, hindered temporal diversity. The results of this study suggest that the species-specific occurrence pattern is a key trait determining a species' contribution to the total annual biodiversity of a given community.
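The four occurrence-pattern groups are defined by two independent cut-offs, one on abundance and one on frequency, so the grouping itself is a simple two-bit classification. A minimal sketch (the threshold values and units are illustrative assumptions; the abstract does not state the study's cut-offs):

```python
def occurrence_group(abundance, frequency, abund_thresh, freq_thresh):
    """Map a species to one of the four occurrence-pattern groups
    (LALF / LAHF / HALF / HAHF) from its abundance and its occurrence
    frequency, each compared against a chosen threshold."""
    a = "H" if abundance >= abund_thresh else "L"
    f = "H" if frequency >= freq_thresh else "L"
    return f"{a}A{f}F"

# e.g. abundance as individuals/L over the year, frequency in months present
print(occurrence_group(5000, 2, abund_thresh=1000, freq_thresh=6))  # HALF
print(occurrence_group(50, 10, abund_thresh=1000, freq_thresh=6))   # LAHF
```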

A Study on Domestic Applicability for the Korean Cosmic-Ray Soil Moisture Observing System (한국형 코즈믹 레이 토양수분 관측 시스템을 위한 국내 적용성 연구)

  • Jaehwan Jeong;Seongkeun Cho;Seulchan Lee;Kiyoung Kim;Yongjun Lee;Chung Dae Lee;Sinjae Lee;Minha Choi
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.233-246
    • /
    • 2023
  • In terms of understanding the water cycle and managing water resources efficiently, the importance of soil moisture has been highlighted. However, in Korea, the lack of qualified in-situ soil moisture data results in very limited utility. Even when satellite-based data are applied, the absence of ground reference data makes objective evaluation and correction difficult. The cosmic-ray neutron probe (CRNP) can play a key role in producing data for satellite data calibration. CRNP installation is non-invasive, minimizing damage to the soil and vegetation environment, and offers spatial representativeness at an intermediate scale. These characteristics are advantageous for establishing an observation network in Korea, which has many mountainous areas with dense vegetation. Therefore, this study evaluated the applicability of a CRNP soil moisture observatory in Korea as part of the establishment of a Korean cOsmic-ray Soil Moisture Observing System (KOSMOS). The CRNP observation station was installed at the Gunup-ri observation station, considering the ease of securing power and installation sites and the efficient use of other hydro-meteorological measurements. To evaluate the CRNP soil moisture data, 12 additional in-situ soil moisture sensors were installed, and spatial representativeness was evaluated through a temporal stability analysis. The neutron counts recorded by the CRNP averaged about 1,087 counts per hour, lower than at the Solmacheon observation station, indicating that the Hongcheon observation station has a more humid environment. Soil moisture was estimated through neutron correction and an early-stage calibration of the observed neutron counts. Even with a short calibration period, the CRNP soil moisture data showed a high correlation (r = 0.82) and high accuracy (root mean square error = 0.02 m3/m3) in validation against the in-situ data.
Higher-quality soil moisture data production with greater accuracy is expected after recalibration with a year or more of accumulated data reflecting seasonal patterns. These results, together with previous studies that verified the quality of CRNP soil moisture data, suggest that high-quality soil moisture data can be produced when constructing KOSMOS.
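The conversion from corrected neutron counts to volumetric soil moisture mentioned above is commonly done with the Desilets et al. (2010) calibration function; the sketch below uses its standard shape parameters, while the site parameter `n0 = 1600` is an assumed placeholder, not the study's calibrated value:

```python
def crnp_soil_moisture(n_counts, n0, a0=0.0808, a1=0.372, a2=0.115):
    """Desilets et al. (2010) function widely used to convert corrected
    CRNP neutron counts into volumetric soil moisture (m3/m3); n0 is
    the site-specific calibration parameter fitted against in-situ data."""
    return a0 / (n_counts / n0 - a1) - a2

# Mean count rate from the abstract (1,087 cph) with a placeholder n0:
print(round(crnp_soil_moisture(n_counts=1087, n0=1600), 3))  # 0.148
```

Since moisture falls as counts rise, the lower count rate at this site relative to Solmacheon is consistent with the abstract's conclusion that the environment is more humid.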

A study of analytical method for Benzo[a]pyrene in edible oils (식용유지 중 벤조피렌 분석법 비교 연구)

  • Min-Jeong Kim;jun-Young Park;Min-Ju Kim;Eun-Young Jo;Mi-Young Park;Nan-Sook Han;Sook-Nam Hwang
    • Analytical Science and Technology
    • /
    • v.36 no.6
    • /
    • pp.291-299
    • /
    • 2023
  • Benzo[a]pyrene in edible oils is extracted using methods such as liquid-liquid, Soxhlet, and ultrasound-assisted extraction. However, these extraction methods have significant drawbacks, such as long extraction times and large solvent consumption. To overcome these drawbacks, this study attempted to improve the current complex benzo[a]pyrene analysis method by applying the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, which allows analysis simply and in a short time. The QuEChERS method applied in this study extracts benzo[a]pyrene into n-hexane-saturated acetonitrile and n-hexane. After extraction and partitioning using magnesium sulfate and sodium chloride, benzo[a]pyrene is analyzed by liquid chromatography with a fluorescence detector (LC/FLR). In the validation of the new method, the limit of detection (LOD) and limit of quantification (LOQ) were 0.02 ㎍/kg and 0.05 ㎍/kg, respectively. Calibration curves were constructed using five levels (0.1~10 ㎍/kg), and the coefficient of determination (R²) was above 0.99. The mean recovery ranged from 74.5 to 79.3% with a relative standard deviation (RSD) between 0.52 and 1.58%. The accuracy and precision were 72.6~79.4% and 0.14~7.20%, respectively. All results satisfied the criteria requested in the Food Safety Evaluation Department guidelines (2016) and the AOAC Official Methods of Analysis (2023). Therefore, the analysis method presented in this study is a relatively simple pretreatment method that, compared with the existing method, reduced the analysis time and solvent use by 92% and 96%, respectively.
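The calibration-curve and recovery figures above rest on two routine calculations: an ordinary least-squares fit of detector response against concentration, and recovery expressed as a percentage of the spiked amount. A generic sketch (the five detector responses are invented for illustration; only the 0.1–10 ㎍/kg level range is from the abstract):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

def recovery_pct(measured, spiked):
    """Recovery as a percentage of the spiked concentration."""
    return 100.0 * measured / spiked

# Hypothetical five-level curve over the 0.1-10 ug/kg range:
xs = [0.1, 0.5, 1.0, 5.0, 10.0]
ys = [0.95, 5.1, 10.2, 50.4, 99.8]  # invented detector responses
intercept, slope = fit_line(xs, ys)
print(round(recovery_pct(0.745, 1.0), 1))  # 74.5
```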