• Title/Summary/Keyword: N2 method

Volatile Organic Compounds(VOCs) Sensing Properties of Thin Films Based on Copper phthalocyanine and Dilithium phthalocyanine Compounds (Copper phthalocyanine과 Dilithium phthalocyanine 화합물 박막의 휘발성 유기화합물(VOCs) 센서 특성)

  • Kim, Dong Hyun;Kang, Young Goo;Kang, Young Jin
    • Journal of the Korean Society of Safety, v.28 no.2, pp.37-41, 2013
  • In this work, we report the volatile organic compound (VOC) sensing properties of copper phthalocyanine (CuPc) and dilithium phthalocyanine (DiLiPc) thin films deposited onto alumina substrates. The sensing devices were fabricated by the evaporation method and the spin-coating method. The metallophthalocyanine macrocyclic compound solutions were blended with N,N'-diphenyl-N,N'-bis(1-naphthyl)-1,1'-biphenyl-4,4'-diamine and/or poly[2-methoxy-5-(2'-ethylhexyloxy)-1,4-phenylenevinylene] solutions. The influence of these blends on the resistance of the metallophthalocyanine thin films was measured and analyzed for five different volatile organic compounds. The following results were obtained. AFM 3D images of the deposited metallophthalocyanine thin films show surface roughnesses of about 4.1~14.3 nm (7.5~8.1%) for CuPc and 10.3~22.2 nm (7.9~11.5%) for DiLiPc. The resistance of the CuPc and DiLiPc thin films decreases with increasing VOC concentration. For CuPc and DiLiPc films blended with N,N'-diphenyl-N,N'-bis(1-naphthyl)-1,1'-biphenyl-4,4'-diamine and/or poly[2-methoxy-5-(2'-ethylhexyloxy)-1,4-phenylenevinylene], the decrease in resistance with increasing VOC concentration is larger for the DiLiPc-based thin films than for the CuPc-based thin films.

A Method for Determination of Nitrogen in Ruminant Feedstuffs and Products

  • Islam, M.R.;Ishida, M.;Ando, S.;Nishida, T.;Yamada, T.
    • Asian-Australasian Journal of Animal Sciences, v.16 no.10, pp.1438-1442, 2003
  • A method for the determination of nitrogen in ruminant feedstuffs, products and excreta (e.g. milk and urine) using a spectrophotometer is developed, in which samples processed for P determination are also used to determine N. Samples are digested with sulphuric acid and subsequently with hydrogen peroxide in Kjeldahl tubes. The digested solutions, along with phenol and buffered alkaline hypochlorite reagents, are incubated in a water bath at $37^{\circ}C$ for 30 min and then read in the spectrophotometer. The spectrophotometer, set at 625 nm, measures the N concentration of each sample. Nitrogen in 261 of the samples was also determined by the classical Kjeldahl method in order to develop a relationship between N determined by the Kjeldahl method (Y) and by the colorimetric method (X). The mean value of Y did not differ significantly from that of X (0.92 vs. 0.96; p>0.05). The colorimetric method predicted Kjeldahl N highly significantly (Y=0.985X-0.024, $R^2=0.993$, p<0.001; or more simply Y=0.974X, $R^2=0.993$, p<0.001). A regression analysis found no difference (p>0.05; both t-test and F-test) between colorimetric (0.96% N) and adjusted (0.96% N) N. In comparison with the Kjeldahl method, the analytical capacity for N of the colorimetric method increases greatly: 200-300 determinations of N are possible in a working day. In addition, the system makes it possible not only to use the same digested solution for both N and P determination of a particular sample, but also to assay both N and P on the same spectrophotometer. Therefore, the system may be attractive in situations where both elements of a sample are to be determined. In conclusion, the speed of N determination, low cost, efficient use of labour, time and reagents, fewer items of equipment, and the reduction of environmental pollution by reducing effluent and toxic elements are the advantages of this method of N determination.
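
A minimal sketch of how the reported calibration could be applied, assuming a colorimetric N reading (% N) is already available; the helper name and the example value are hypothetical, while the coefficients are the regression reported above:

```python
def kjeldahl_equivalent_n(colorimetric_n, simple=False):
    """Convert a colorimetric N reading (%N) to a Kjeldahl-equivalent value
    using the regression reported in the abstract (hypothetical helper name)."""
    if simple:
        return 0.974 * colorimetric_n           # Y = 0.974X,         R^2 = 0.993
    return 0.985 * colorimetric_n - 0.024       # Y = 0.985X - 0.024, R^2 = 0.993

# A colorimetric reading of 0.96 %N maps to about 0.92 %N Kjeldahl-equivalent,
# consistent with the mean values quoted above.
print(kjeldahl_equivalent_n(0.96))
```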

Optimum Design of a Perpendicular Permanent Magnet Double-sided Linear Synchronous Motor using Response Surface Method (반응표면법을 이용한 수직배열형 양측식 영구자석 선형 동기전동기의 최적설계)

  • Kim, Chang-Eob
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.30 no.2, pp.26-30, 2016
  • This paper presents an optimum design of a perpendicular PMDSLSM (Permanent Magnet Double-sided Linear Synchronous Motor) to minimize the detent force. The response surface method was used as the optimization method, with the 3D finite element method for the field calculation. The design variables of the machine were the primary core width and thickness, and the magnet width, thickness, and length. The objective functions were to minimize the detent force and maximize the thrust of the basic model. The results showed that, compared to the basic model, the thrust of the optimum design increased from 82.1 N to 90.2 N while the detent force decreased from 15.2 N to 2.8 N.
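
To illustrate the response-surface idea only (the design points, responses, and variable ranges below are hypothetical placeholders, not the paper's FEM data, and only two of the five design variables are shown), a quadratic surrogate can be fitted to sampled objective values and then minimized:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design points (two design variables) and detent-force responses
# that would normally come from 3D FEM runs -- placeholder values only.
X = np.array([[8, 3], [8, 4], [8, 5], [9, 3], [9, 4],
              [9, 5], [10, 3], [10, 4], [10, 5]], dtype=float)
y = np.array([14.0, 12.1, 11.5, 11.0, 8.7, 9.2, 12.8, 10.4, 9.9])

def quad_terms(X):
    """Second-order response-surface model terms."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)   # fit the surface

def surrogate(x):
    # Predicted detent force at a candidate design point (x1, x2).
    return (quad_terms(np.atleast_2d(x)) @ beta)[0]

res = minimize(surrogate, x0=[9.0, 4.0], bounds=[(8, 10), (3, 5)])
print("optimum variables:", res.x, "predicted detent force:", res.fun)
```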

ON THE HIGH-ORDER CONVERGENCE OF THE k-FOLD PSEUDO-CAUCHY'S METHOD FOR A SIMPLE ROOT

  • Kim, Young Ik
    • Journal of the Chungcheong Mathematical Society, v.21 no.1, pp.107-116, 2008
  • In this study, the k-fold pseudo-Cauchy's method of order k+3 is proposed, derived from the classical Cauchy's method defined by the iteration $x_{n+1}=x_n-{\frac{f^{\prime}(x_n)}{f^{{\prime}{\prime}}(x_n)}}{\cdot}(1-{\sqrt{1-2f(x_n)f^{{\prime}{\prime}}(x_n)/f^{\prime}(x_n)^2}})$. The convergence behavior of the asymptotic error constant is investigated near the corresponding simple zero. A root-finding algorithm based on the k-fold pseudo-Cauchy's method is described, and computational examples successfully confirm the analysis.
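
As a quick illustration of the classical Cauchy iteration quoted above (not the k-fold pseudo-Cauchy extension developed in the paper), the sketch below applies it to a sample function with known first and second derivatives; the test function, starting point, and tolerance are arbitrary choices:

```python
import math

def cauchy_step(x, f, df, d2f):
    """One step of the classical Cauchy iteration quoted in the abstract."""
    disc = 1.0 - 2.0 * f(x) * d2f(x) / df(x) ** 2
    return x - (df(x) / d2f(x)) * (1.0 - math.sqrt(disc))

def cauchy_solve(x0, f, df, d2f, tol=1e-14, max_iter=50):
    """Iterate until successive approximations agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = cauchy_step(x, f, df, d2f)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Simple root of f(x) = x^3 - 2 (the cube root of 2), starting from x0 = 1.5.
f   = lambda x: x**3 - 2.0
df  = lambda x: 3.0 * x**2
d2f = lambda x: 6.0 * x
print(cauchy_solve(1.5, f, df, d2f))   # ~1.2599210498948732
```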

Optimal Test Function Petrov-Galerkin Method (최적시행함수 Petrov-Galerkin 방법)

  • Sung-Uk Choi
    • Journal of Korea Water Resources Association, v.31 no.5, pp.599-612, 1998
  • Numerical analysis of convection-dominated transport problems is challenging because of the dual characteristics of the governing equation. In the finite element method, one strategy is to modify the test function to weight more in the upwind direction; this is called the Petrov-Galerkin method. In this paper, both N+1 and N+2 Petrov-Galerkin methods are applied to transport problems at high grid Peclet number. A frequency fitting algorithm is used to obtain optimal levels of N+2 upwinding, and the results are discussed. A new Petrov-Galerkin method, named the "Optimal Test Function Petrov-Galerkin Method," is also proposed in this paper. The test function of this numerical method changes its shape depending upon the relative strength of convection to diffusion. A numerical experiment is carried out to demonstrate the performance of the proposed method.
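
For background, the sketch below shows the classical one-parameter upwind Petrov-Galerkin treatment of 1D steady advection-diffusion on a uniform mesh (the textbook scheme, not the paper's N+2 or optimal-test-function formulations): the upwinded test functions act like added artificial diffusion, and the well-known choice alpha = coth(Pe) - 1/Pe at grid Peclet number Pe gives nodally exact results for this model problem. All numerical values are arbitrary illustrative choices.

```python
import numpy as np

def solve_adv_diff_1d(u, D, n_el, alpha=0.0):
    """1D steady advection-diffusion u*c' = D*c'' on [0,1], c(0)=0, c(1)=1,
    linear elements; the upwinding parameter alpha enters as artificial
    diffusion alpha*u*h/2 (the effect of the modified test functions)."""
    h = 1.0 / n_el
    D_eff = D + alpha * u * h / 2.0
    n = n_el + 1
    A = np.zeros((n, n)); b = np.zeros(n)
    for e in range(n_el):                         # assemble element matrices
        idx = [e, e + 1]
        Ke = D_eff / h * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
             + u / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
        A[np.ix_(idx, idx)] += Ke
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0      # c(0) = 0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0  # c(1) = 1
    return np.linalg.solve(A, b)

u, D, n_el = 1.0, 0.005, 20                       # grid Peclet number Pe = u*h/(2D) = 5
Pe = u * (1.0 / n_el) / (2.0 * D)
alpha_opt = 1.0 / np.tanh(Pe) - 1.0 / Pe          # optimal upwinding parameter
x = np.linspace(0.0, 1.0, n_el + 1)
exact = (np.exp(u * x / D) - 1.0) / (np.exp(u / D) - 1.0)
for a in (0.0, alpha_opt):                        # Bubnov-Galerkin vs. upwinded
    c = solve_adv_diff_1d(u, D, n_el, alpha=a)
    print(f"alpha = {a:.3f}, max nodal error = {np.max(np.abs(c - exact)):.2e}")
```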

Efficient Computation of Data Cubes Using MapReduce (맵리듀스를 사용한 데이터 큐브의 효율적인 계산 기법)

  • Lee, Ki Yong;Park, Sojeong;Park, Eunju;Park, Jinkyung;Choi, Yeunjung
    • KIPS Transactions on Software and Data Engineering, v.3 no.11, pp.479-486, 2014
  • MapReduce is a programming model for processing large amounts of data in parallel. The data cube is widely used for analyzing large amounts of data; it is an operator that computes group-bys for all possible combinations of given dimension attributes. When the number of dimension attributes is n, the data cube computes $2^n$ group-bys. In this paper, we propose an efficient method for computing data cubes using MapReduce. The proposed method partitions the $2^n$ group-bys into $_nC_{{\lceil}n/2{\rceil}}$ batches and computes those batches in stages using ${\lceil}n/2{\rceil}$ MapReduce jobs. Compared to existing methods, the proposed method significantly reduces the amount of intermediate data generated by the mappers, so that the cost of sorting and transferring that intermediate data is reduced significantly. Consequently, the total processing time for computing a data cube is reduced. Through experiments, we show the efficiency of the proposed method over existing methods.
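
For reference, a plain single-machine sketch of the cube operator itself, i.e., what the $2^n$ group-bys are, computed by straightforward in-memory aggregation; this is not the paper's MapReduce batching scheme, and the sample rows are placeholders:

```python
from itertools import combinations
from collections import defaultdict
from math import comb, ceil

def cube(rows, dims, measure):
    """Compute all 2^n group-bys (SUM of 'measure') by in-memory aggregation.
    'rows' is a list of dicts, 'dims' the dimension attribute names."""
    results = {}
    for r in range(len(dims) + 1):
        for grouping in combinations(dims, r):      # one of the 2^n group-bys
            agg = defaultdict(float)
            for row in rows:
                agg[tuple(row[d] for d in grouping)] += row[measure]
            results[grouping] = dict(agg)
    return results

# Placeholder rows with n = 3 dimension attributes.
rows = [
    {"year": 2014, "region": "KR", "product": "A", "sales": 10.0},
    {"year": 2014, "region": "KR", "product": "B", "sales": 5.0},
    {"year": 2014, "region": "US", "product": "A", "sales": 7.0},
]
dims = ["year", "region", "product"]
print(len(cube(rows, dims, "sales")), "group-bys")                          # 2^3 = 8
print(comb(len(dims), ceil(len(dims) / 2)), "batches as in the abstract")   # 3C2 = 3
```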

A Practical Method to Quantify Very Low Fluxes of Nitrous Oxide from a Rice Paddy (벼논에서 미량 아산화질소 플럭스의 정량을 위한 실용적 방법)

  • Okjung, Ju;Namgoo, Kang;Hoseup, Soh;Jung-Soo, Park
    • Korean Journal of Agricultural and Forest Meteorology, v.24 no.4, pp.285-294, 2022
  • In order to accurately calculate greenhouse gas emissions in the agricultural sector, Korea has been developing country-specific emission factors through direct measurement of gas fluxes using the closed-chamber method. For rice paddies, only country-specific emission factors for methane (CH4) have been developed, so it is necessary to develop factors for nitrous oxide (N2O), which is affected by the application of nitrogen fertilizer. However, since the concentration of N2O emitted from rice cultivation is very low, QA/QC measures such as the method detection limit and the practical quantification limit are important. In this study, N2O emission from a rice paddy as affected by the amount of nitrogen fertilizer was evaluated, taking into account both the method detection limit and the practical quantification limit for the N2O concentration. The N2O emission from rice paddy soils affected by nitrogen fertilizer application was estimated as follows. The method detection limit (MDL) of the N2O concentration was calculated at the 95% confidence level based on the pooled standard deviation of concentration data sets obtained by measuring a standard gas containing 98 nmol mol-1 N2O 10 times for 3 days. The practical quantification limit (PQL) of the N2O concentration was estimated by multiplying the pooled standard deviation by 10. For the N2O flux data measured during the rice cultivation period in 2021, the MDL and PQL of the N2O concentration were 18 nmol mol-1 and 87 nmol mol-1, respectively. The measured values above the PQL were only about 12% of the total data. The cumulative N2O emission estimated based on the MDL and PQL was higher than the cumulative emission without nitrogen fertilizer application. This research would contribute to improving the reliability of N2O flux quantification for accurate estimates of greenhouse gas emissions and their uncertainties.
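
A minimal sketch of the MDL/PQL arithmetic described above, assuming (as one reading of the abstract) ten replicate measurements of the 98 nmol mol-1 standard on each of three days; the replicate values are placeholders, and the one-sided 95% Student-t factor is one common MDL convention, so the paper's exact statistical procedure may differ:

```python
import numpy as np
from scipy import stats

# Placeholder replicate readings (nmol mol-1) of a 98 nmol mol-1 N2O standard,
# ten injections per day on three days -- not the paper's actual data.
daily_sets = [
    np.array([97.1, 98.4, 96.8, 99.0, 97.7, 98.9, 96.5, 98.2, 97.4, 98.8]),
    np.array([98.3, 97.0, 99.2, 96.9, 98.1, 97.6, 98.7, 97.2, 98.0, 99.1]),
    np.array([96.7, 98.5, 97.3, 98.9, 97.8, 96.6, 98.4, 97.1, 99.0, 98.6]),
]

dfs = np.array([len(s) - 1 for s in daily_sets])
variances = np.array([np.var(s, ddof=1) for s in daily_sets])
s_pooled = np.sqrt(np.sum(dfs * variances) / np.sum(dfs))   # pooled standard deviation

t95 = stats.t.ppf(0.95, df=np.sum(dfs))   # one-sided 95% Student-t quantile
mdl = t95 * s_pooled                      # method detection limit
pql = 10.0 * s_pooled                     # practical quantification limit (10 x pooled SD)
print(f"pooled SD = {s_pooled:.2f}, MDL = {mdl:.2f}, PQL = {pql:.2f} nmol mol-1")
```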

A Nearly Optimal One-to-Many Routing Algorithm on k-ary n-cube Networks

  • Choi, Dongmin;Chung, Ilyong
    • Smart Media Journal, v.7 no.2, pp.9-14, 2018
  • The k-ary n-cube $Q^k_n$ is widely used in the design and implementation of parallel and distributed processing architectures. It consists of $k^n$ identical nodes; each node has degree 2n and is connected through bidirectional, point-to-point communication channels to 2n different neighbors. On $Q^k_n$, we would like to transmit packets from a source node to 2n destination nodes simultaneously along paths on this network, where the $i^{th}$ packet is transmitted along the $i^{th}$ path, $0{\leq}i{\leq}2n-1$. In order for all packets to arrive at the destination nodes quickly and securely, we present an $O(n^3)$ routing algorithm on $Q^k_n$ for generating a set of one-to-many node-disjoint and nearly shortest paths, where each path is either shortest or nearly shortest and the total length of these paths is nearly minimum, since the paths are determined mainly by employing the Hungarian method.
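
For intuition about the role of the Hungarian method, the sketch below solves only the assignment step: matching the 2n first-hop neighbors of the source to the 2n destinations so that the total remaining shortest-path (Lee) distance is minimized. It is an illustration, not the paper's full $O(n^3)$ node-disjoint routing algorithm, and the values of k, n, and the node coordinates are arbitrary:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment   # Hungarian-method solver

def lee_distance(a, b, k):
    """Shortest-path distance between two nodes of the k-ary n-cube."""
    return sum(min(abs(ai - bi), k - abs(ai - bi)) for ai, bi in zip(a, b))

def neighbors(node, k):
    """The 2n neighbors of a node (one +1 and one -1 step per dimension)."""
    nbrs = []
    for i in range(len(node)):
        for step in (+1, -1):
            nbr = list(node)
            nbr[i] = (nbr[i] + step) % k
            nbrs.append(tuple(nbr))
    return nbrs

k, n = 4, 3
source = (0, 0, 0)
dests = [(1, 2, 3), (3, 3, 1), (2, 0, 2), (0, 1, 1), (3, 2, 0), (1, 1, 2)]  # 2n destinations

first_hops = neighbors(source, k)
# cost[i][j]: path length if the packet for destination j leaves through first hop i.
cost = np.array([[1 + lee_distance(h, d, k) for d in dests] for h in first_hops])
rows, cols = linear_sum_assignment(cost)            # minimum-total-length matching
for i, j in zip(rows, cols):
    print(f"destination {dests[j]} via first hop {first_hops[i]}, path length {cost[i, j]}")
```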

Fast HEVC Encoding based on CU-Depth First Decision (CU 깊이 우선 결정 기반의 HEVC 고속 부호화 방법)

  • Yoo, Sung-Eun;Ahn, Yong-Jo;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP, v.49 no.3, pp.40-50, 2012
  • In this paper, we propose a fast CU (Coding Unit) mode decision method. To reduce the computational complexity and encoding time of HEVC, we divide the CU, PU (Prediction Unit), and TU (Transform Unit) decision process into two stages. In the first stage, because the $2N{\times}2N$ PU mode is selected most often among the $2N{\times}2N$, $N{\times}2N$, $2N{\times}N$, and $N{\times}N$ PU modes, the proposed algorithm uses only the $2N{\times}2N$ PU mode to decide the depth of each CU in the LCU (Largest CU). Then, the proposed method decides the exact PU and TU modes at the depth level determined in the first stage. In addition, an early skip decision rule is applied to the proposed method to obtain further reduction of computational complexity. The proposed method reduces the computational complexity of the HEVC encoder by simplifying the CU depth decision. Compared with the HM 3.3 HEVC reference software, the proposed algorithm reduces computational complexity by about 50% while increasing the bitrate by only about 2%.
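
A rough sketch of the two-stage flow described above; `rd_cost_2Nx2N`, `rd_cost_full`, and `is_early_skip` stand in for the encoder's actual rate-distortion evaluations and are hypothetical callables, so this is an outline of the control flow rather than the paper's implementation:

```python
def decide_lcu(lcu, max_depth, rd_cost_2Nx2N, rd_cost_full, is_early_skip):
    """Two-stage CU decision sketch: (1) choose the CU depth using only the
    2Nx2N PU mode; (2) run the exact PU/TU mode decision at that depth only."""
    # Stage 1: probe each depth with the 2Nx2N PU cost only.
    best_depth, best_cost = 0, float("inf")
    for depth in range(max_depth + 1):
        if is_early_skip(lcu, depth):        # early skip rule: stop probing deeper
            best_depth = depth
            break
        cost = rd_cost_2Nx2N(lcu, depth)
        if cost < best_cost:
            best_depth, best_cost = depth, cost
    # Stage 2: exact PU and TU mode decision, restricted to the chosen depth.
    return rd_cost_full(lcu, best_depth)
```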

A Step-by-Step Primality Test (단계적 소수 판별법)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.13 no.3, pp.103-109, 2013
  • The Miller-Rabin method is the most widely used primality test. However, this method can mistakenly report a Carmichael number or a semiprime as prime (a strong liar) although these are composite numbers. To address this problem, it selects k values of m from [2, n-1] with (m,n)=1. The Miller-Rabin method declares a given number prime if, after computing $n-1=2^sd$, the outcome satisfies $m^d{\equiv}1$ (mod n) or $m^{2^rd}{\equiv}-1$ (mod n) for some $0{\leq}r{\leq}s-1$. This paper proposes a step-by-step primality testing algorithm that restricts m to 2, achieving 98.8% accuracy. As a first step, the proposed method rejects composite numbers that do not satisfy $n=6k{\pm}1$, $n_1{\neq}5$. Next, it determines primality by computing $2^{2^{s-1}d}{\equiv}{\beta}_{s-1}$ (mod n) and $2^d{\equiv}{\beta}_0$ (mod n). In the third step, it tests ${\beta}_r{\equiv}-1$ in the range $1{\leq}r{\leq}s-2$ for ${\beta}_0$ > 1. In the case of ${\beta}_0$ = 1, it retests with m=3,5,7,11,13,17 sequentially. When applied to n = [101, 1000], the proposed algorithm determined 96.55% of the primes in the initial stage. The remaining 3% were resolved in the ${\beta}_0$ > 1 step and 0.55% in the ${\beta}_0$ = 1 step.
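
For reference, a compact sketch of a Miller-Rabin witness test restricted to the small fixed base list m = 2, 3, 5, 7, 11, 13, 17, preceded by the $n=6k{\pm}1$ prefilter, in the spirit of the step-by-step test described above; it is a generic implementation, not the paper's exact staged procedure:

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17)):
    """Miller-Rabin test with small fixed bases, after a 6k +/- 1 prefilter."""
    if n < 2:
        return False
    for p in (2, 3, 5):
        if n % p == 0:
            return n == p
    if n % 6 not in (1, 5):            # reject n not of the form 6k +/- 1
        return False
    s, d = 0, n - 1
    while d % 2 == 0:                  # write n - 1 = 2^s * d with d odd
        s, d = s + 1, d // 2
    for m in bases:
        if m % n == 0:
            continue
        beta = pow(m, d, n)            # beta_0 = m^d mod n
        if beta in (1, n - 1):
            continue
        for _ in range(s - 1):         # beta_r = m^(2^r * d) mod n, r = 1..s-1
            beta = pow(beta, 2, n)
            if beta == n - 1:
                break
        else:
            return False               # m is a witness that n is composite
    return True

print([n for n in range(101, 150) if is_probable_prime(n)])   # primes in [101, 150)
```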