• Title/Summary/Keyword: matrix methods

Studies on Seed Production of Saddleback Clownfish, Amphiprion polymnus 1) Spawning, Egg Development and Larvae Culture (Saddleback clownfish, Amphiprion polymnus의 종묘생산에 관한 연구 1) 산란과 난 발생 및 자치어 사육)

  • Yoon, Young-Seock;Rho, Sum;Choi, Young-Ung;Kim, Jong-Su;Lee, Young-Don
    • Journal of Aquaculture
    • /
    • v.18 no.2
    • /
    • pp.107-114
    • /
    • 2005
  • Clownfish are important and very popular fish in the ornamental aquarium industry, and demand for them is increasing dramatically. The present study was conducted to verify methods of broodstock management, patterns of spawning, rates of egg hatching, and estimates of larval growth for the saddleback clownfish, Amphiprion polymnus. Spawning occurred 8 times between August 2002 and June 2004, with 2 females and 1 male participating. Fertilized eggs were separated by an adhesive matrix and were oval in shape. The eggs were 2.46±0.13 mm in size as measured along the longest axis. The percentage of fertilized eggs was 96.7%. Hatching was observed seven days post-spawning and the hatching rate was 85.5%. The newly hatched larvae were 4.58±0.21 mm TL (total length). Larvae had an open mouth and anus, and an oval yolk sac. On the 1st day after hatching, the larvae were 4.90±0.35 mm TL. The larvae began to eat rotifers after complete yolk absorption. On the 5th day post-hatch, larvae were 5.88±0.31 mm TL with complete fins, and the survival rate was 48.6%. At 8 days after hatching, a band began to appear on the head and back of the larvae, indicating the beginning of metamorphosis. Metamorphosis was completed at an average TL of 15.00±2.12 mm on the 23rd day after hatching. By the 45th day after hatching, juveniles averaged 22.76±3.22 mm TL and the survival rate was 28.4%.

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (벡터 유한 요소를 이용한 고주파 3차원 전자탐사 모델링)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.280-290
    • /
    • 2002
  • A three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using the finite element method (FEM) to provide more efficient interpretation techniques for EM data. When FEM based on nodal elements is applied to EM problems, spurious solutions, the so-called 'vector parasite', occur due to the discontinuity of normal electric fields and may lead to completely erroneous results. Among the methods for curing this spurious problem, this study adopts vector elements, whose basis functions have both amplitude and direction. To reduce computational cost and required core memory, the complex bi-conjugate gradient (CBCG) method is applied to solve the complex symmetric FEM matrix, and the point Jacobi method is used to accelerate the convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic fields for a layered-earth model are compared with those of the layered-earth solution. As expected, the vector-based FEM developed in this study does not cause any vector parasite problem, while the conventional nodal-based FEM produces large errors due to the discontinuity of field variables. To test the applicability to high frequencies, 100 MHz is used as the operating frequency for the layered structure. The modeled fields calculated from the developed code also match the layered-earth ones well for a model with a dielectric anomaly as well as a conductive anomaly. In the case of a vertical electric dipole source, however, the discontinuity of field variables causes the conventional nodal-based FEM to include large errors due to the vector parasite. Even for that case, the vector-based FEM gave almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show unique behaviors different from those induced by a conductive anomaly. Since our 3-D EM modeling code can reflect the effect of a dielectric anomaly as well as a conductive anomaly, it may serve as groundwork not only for applying the high-frequency EM method to field surveys but also for analyzing the field data obtained by the high-frequency EM method.
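  For readers unfamiliar with the solver named in this abstract, the following is a minimal Python/NumPy sketch of a Jacobi-preconditioned conjugate-gradient-type iteration for a complex symmetric (non-Hermitian) system, in the spirit of the CBCG scheme mentioned above; the test matrix, tolerance, and convergence check are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cbcg_complex_symmetric(A, b, tol=1e-8, max_iter=500):
    """CG-type solver for a complex symmetric system A x = b with a point
    Jacobi preconditioner, using the unconjugated inner product x^T y."""
    n = b.shape[0]
    M_inv = 1.0 / A.diagonal()           # point Jacobi preconditioner
    x = np.zeros(n, dtype=complex)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r.dot(z)                         # unconjugated inner product r^T z
    b_norm = np.linalg.norm(b)
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / p.dot(Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * b_norm:
            break
        z = M_inv * r
        rz_new = r.dot(z)
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x, k

# Small random complex symmetric test system (illustrative only)
rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.T) / 2 + n * np.eye(n)        # symmetric (not Hermitian), well conditioned
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x, iters = cbcg_complex_symmetric(A, b)
print(iters, np.linalg.norm(A @ x - b))
```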

New Technologies for the Removal of Bacteriophages Contaminating Whey and Whey Products as Cheese by-Products: A Review (치즈 부산물인 유청과 유청 제품에 감염된 박테리오파지 제거를 위해 새롭게 개발된 기술: 총설)

  • Kim, Dong-Hyeon;Chon, Jung-Whan;Kim, Hyun-Sook;Kim, Hong-Seok;Song, Kwang-Young;Hwang, Dae-Geun;Yim, Jin-Hyuk;Kang, Il-Byung;Lee, Soo-Kyung;Seo, Kun-Ho
    • Journal of Dairy Science and Biotechnology
    • /
    • v.32 no.2
    • /
    • pp.93-100
    • /
    • 2014
  • In general, whey obtained from various cheese batches is reused to improve the texture and to increase the yield and nutrient value of the various final milk-based products. In fact, re-usage of whey proteins, including whey cream, is a common and routine procedure. Unfortunately, most bacteriophages can survive heat treatments such as pasteurization. Hence, there is a high risk of an increase in the bacteriophage population during the cheese-making process. Whey samples contaminated with bacteriophages can cause serious problems in the cheese industry. In particular, the process of whey separation frequently leads to aerosol-borne bacteriophages and thus to a contaminated environment in the dairy production plant. In addition, whey proteins and whey cream reused in a cheese matrix can be infected by bacteriophages with thermal resistance. Therefore, to completely abolish the various risks of fermentation failure during re-usage of whey, a whey treatment that effectively decreases the bacteriophage population is urgently needed. Hence, the purpose of this review is to introduce various newly developed methods and state-of-the-art technologies for removing bacteriophages from contaminated whey and whey products.

Development of Dose Planning System for Brachytherapy with High Dose Rate Using Ir-192 Source (고선량률 강내조사선원을 이용한 근접조사선량계획전산화 개발)

  • Choi Tae Jin;Yei Ji Won;Kim Jin Hee;Kim OK;Lee Ho Joon;Han Hyun Soo
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.283-293
    • /
    • 2002
  • Purpose : A PC-based brachytherapy planning system was developed to display dose distributions on simulation images as 2D isodose curves, including dose profiles, the dose-volume histogram, and 3D dose distributions. Materials and Methods : Brachytherapy dose planning software was developed especially for the Ir-192 source, which had been developed by KAERI as a substitute for the Co-60 source. The dose computation was achieved by looking up a pre-computed dose matrix tabulated as a function of radial and axial distance from the source. The computation included the effects of the tissue scattering correction factor and anisotropic dose distributions. The computed dose distributions were displayed on 2D film images including the dose profiles, 3D isodose curves in wire-frame form, and the dose-volume histogram. Results : The brachytherapy dose plan was initiated by obtaining source positions on the principal plane of the source axis. The dose distributions in tissue were computed on a 200×200 mm² plane with the source axis located at the center of the plane. The point doses along the longitudinal axis of the source were 4.5~9.0% smaller than those on the radial axis of the plane, due to the anisotropy created by the cylindrical shape of the source. When compared to manual calculation, the point doses showed 1~5% discrepancies from the benchmarking plan. The 2D dose distributions of different planes were matched to the same administered isodose level in order to analyze the shape of the optimized dose level. The accumulated dose-volume histogram, displayed as a function of the percentage volume of the administered minimum dose level, was used to guide the volume analysis. Conclusion : This study evaluated the developed computerized dose planning system for brachytherapy. The dose distribution was displayed on the coronal, sagittal, and axial planes with the dose histogram. The accumulated DVH and 3D dose distributions provided by the developed system may be useful tools for dose analysis in comparison with orthogonal dose planning.
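  The dose-matrix look-up described in this abstract is essentially a 2D table interpolation. Below is a minimal Python sketch of such a look-up with bilinear interpolation; the grid spacing and the placeholder inverse-square dose table are assumptions for illustration only, not the paper's actual Ir-192 data.

```python
import numpy as np

# Hypothetical pre-computed dose table tabulated on a grid of axial (along)
# and radial (away) distances from the source, in mm.
axial = np.linspace(-100.0, 100.0, 41)
radial = np.linspace(1.0, 100.0, 100)
dose_table = 1.0 / (axial[:, None] ** 2 + radial[None, :] ** 2)  # placeholder falloff

def dose_at(z_mm, r_mm):
    """Bilinear look-up of the pre-computed dose matrix at (axial, radial) distance."""
    i = np.clip(np.searchsorted(axial, z_mm) - 1, 0, len(axial) - 2)
    j = np.clip(np.searchsorted(radial, r_mm) - 1, 0, len(radial) - 2)
    tz = (z_mm - axial[i]) / (axial[i + 1] - axial[i])
    tr = (r_mm - radial[j]) / (radial[j + 1] - radial[j])
    return ((1 - tz) * (1 - tr) * dose_table[i, j] +
            tz * (1 - tr) * dose_table[i + 1, j] +
            (1 - tz) * tr * dose_table[i, j + 1] +
            tz * tr * dose_table[i + 1, j + 1])

print(dose_at(5.0, 20.0))
```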

Determination of plasma C16-C24 globotriaosylceramide (Gb3) isoforms by tandem mass spectrometry for diagnosis of Fabry disease (패브리병(Fabry) 진단을 위한 혈장 중 Globotriaosylceramide (Gb3)의 탠덤매스 분석법 개발과 임상 응용)

  • Yoon, Hye-Ran;Cho, Kyung-Hee;Yoo, Han-Wook;Choi, Jin-Ho;Lee, Dong-Hwan;Zhang, Kate;Keutzer, Joan
    • Journal of Genetic Medicine
    • /
    • v.4 no.1
    • /
    • pp.45-52
    • /
    • 2007
  • Purpose : A simple, rapid, and highly sensitive analytical method for Gb3 in plasma was developed without labor-intensive pre-treatment, using electrospray ionization MS/MS (ESI-MS/MS). Measurement of globotriaosylceramide (Gb3, ceramide trihexoside) in plasma has clinical importance for monitoring after enzyme replacement therapy in Fabry disease patients. The disease is an X-linked lipid storage disorder that results from a deficiency of the enzyme α-galactosidase A (α-Gal A). The lack of α-Gal A causes an intracellular accumulation of glycosphingolipids, mainly Gb3. Methods : Only a simple 50-fold dilution of plasma is necessary for the extraction and isolation of Gb3 from plasma. Gb3 in the diluted plasma was dissolved in dioxane containing C17:0 Gb3 as an internal standard. After centrifugation, it was directly injected and analyzed through a guard column in combination with the multiple reaction monitoring mode of ESI-MS/MS. Results : Eight isoforms of Gb3 were completely resolved from the plasma matrix. C16:0 Gb3 accounted for 50% of total Gb3 as the major component in plasma. A linear relationship for the Gb3 isoforms was found in the range of 0.001-1.0 μg/mL. The limit of detection (S/N=3) was 0.001 μg/mL and the limit of quantification was 0.01 μg/mL for C16:0 Gb3, with acceptable precision and accuracy. Correlation coefficients of the calibration curves for the 8 Gb3 isoforms ranged from 0.9678 to 0.9982. Conclusion : The quantitative method developed here could be useful as a rapid and sensitive first-line screening, monitoring, and/or diagnostic tool for Fabry disease.
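  For readers unfamiliar with internal-standard quantification as used in this abstract, here is a minimal Python sketch of back-calculating an analyte concentration from the analyte-to-internal-standard response ratio via a linear calibration curve; the concentrations and peak areas are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical calibration: known C16:0 Gb3 concentrations (ug/mL) vs. the
# corresponding analyte / internal-standard (C17:0 Gb3) peak-area ratios.
cal_conc = np.array([0.001, 0.01, 0.05, 0.1, 0.5, 1.0])
cal_ratio = np.array([0.002, 0.021, 0.099, 0.205, 1.02, 1.98])

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)      # linear calibration fit
r2 = np.corrcoef(cal_conc, cal_ratio)[0, 1] ** 2
print(f"calibration r^2 = {r2:.4f}")

def quantify(analyte_area, is_area):
    """Convert a measured analyte/IS peak-area ratio into concentration (ug/mL)."""
    ratio = analyte_area / is_area
    return (ratio - intercept) / slope

print(quantify(analyte_area=5200.0, is_area=26000.0))       # illustrative sample
```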

Study for Residue Analysis of Herbicide, Clopyralid in Foods (식품 중 제초제 클로피랄리드(Clopyralid)의 잔류 분석법)

  • Kim, Ji-young;Choi, Yoon Ju;Kim, Jong Su;Kim, Do Hoon;Do, Jung Ah;Jung, Yong Hyun;Lee, Kang Bong;Kim, Hyo Chin
    • Korean Journal of Environmental Agriculture
    • /
    • v.37 no.4
    • /
    • pp.283-290
    • /
    • 2018
  • BACKGROUND: Pesticide residue analysis is an essential activity for establishing the food safety of agricultural products. Analytical approaches to food safety are required to meet the international guideline of the Codex (Codex Alimentarius Commission, CAC/GL 40). In this study, we developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to determine the herbicide clopyralid in food matrices. METHODS AND RESULTS: Clopyralid was extracted with aqueous acetonitrile containing formic acid, and the extracts were mixed with a citrate buffer consisting of anhydrous magnesium sulfate, NaCl, sodium citrate dihydrate, and disodium hydrogen citrate sesquihydrate, followed by centrifugation. The supernatants were filtered through a nylon membrane filter and used for the analysis of clopyralid. The method was validated by accuracy and precision experiments on samples fortified at 3 different levels of clopyralid. LC-MS/MS in positive mode was employed to quantitatively determine clopyralid in the food samples. Matrix-matched calibration curves were linear in the range from 0.001 to 0.25 mg/kg with r² > 0.994. The limits of detection and quantification were determined to be 0.001 and 0.01 mg/kg, respectively. The recovery values of clopyralid fortified at 0.01 mg/kg in the control samples ranged from approximately 82 to 106%, with relative standard deviations below 20%. CONCLUSION: The method developed in this study successfully meets the Codex guideline for pesticide residue analysis in food samples. Thus, the method could be applicable to determining pesticides in foods produced domestically and internationally.
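  To make the validation figures in this abstract concrete, here is a small Python sketch that fits a matrix-matched calibration curve and computes recovery and relative standard deviation for replicates fortified at 0.01 mg/kg; all peak areas are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical matrix-matched calibration data: spiked level (mg/kg) vs. peak area.
levels = np.array([0.001, 0.005, 0.01, 0.05, 0.1, 0.25])
areas  = np.array([120., 640., 1250., 6300., 12500., 31000.])

slope, intercept = np.polyfit(levels, areas, 1)            # linear calibration fit
r = np.corrcoef(levels, areas)[0, 1]
print(f"r^2 = {r**2:.4f}")

def concentration(area):
    """Back-calculate concentration (mg/kg) from a measured peak area."""
    return (area - intercept) / slope

# Recovery and precision from replicate control samples fortified at 0.01 mg/kg.
fortified_level = 0.01
replicate_areas = np.array([1180., 1225., 1290., 1260., 1205.])   # illustrative replicates
found = concentration(replicate_areas)
recovery = found / fortified_level * 100.0
print(f"mean recovery = {recovery.mean():.1f}%, "
      f"RSD = {found.std(ddof=1) / found.mean() * 100:.1f}%")
```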

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) performs training to predict the compressed labels, and (iii) restores the predicted labels to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationships between labels or compress the labels by random transformation, they have difficulty capturing the non-linear relationships between labels and therefore cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the gradient loss problem that occurs during backpropagation. To solve this problem, skip connections were devised: by adding a layer's input to its output, gradient loss during backpropagation is prevented, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space.
Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate multi-label classification by restoring the predicted keyword vector back to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately led to the improvement of the multi-label classification performance itself. In addition, the utility of the proposed methodology was identified by comparing its performance according to the domain characteristics and the number of dimensions of the latent label space.
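  As an illustration of the skip-connected autoencoder idea described in this abstract, the following is a minimal PyTorch sketch of a label-embedding autoencoder in which each hidden layer's input is added to its output; the layer sizes, activation choices, and training loop are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class SkipAutoEncoder(nn.Module):
    """Label-embedding autoencoder with a skip connection around each hidden layer
    (dimensions are illustrative, not taken from the paper)."""
    def __init__(self, label_dim=1000, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.enc_in = nn.Linear(label_dim, hidden_dim)
        self.enc_hidden = nn.Linear(hidden_dim, hidden_dim)   # wrapped by a skip connection
        self.enc_out = nn.Linear(hidden_dim, latent_dim)
        self.dec_in = nn.Linear(latent_dim, hidden_dim)
        self.dec_hidden = nn.Linear(hidden_dim, hidden_dim)   # wrapped by a skip connection
        self.dec_out = nn.Linear(hidden_dim, label_dim)
        self.act = nn.ReLU()

    def encode(self, y):
        h = self.act(self.enc_in(y))
        h = h + self.act(self.enc_hidden(h))      # skip connection: input added to output
        return self.enc_out(h)

    def decode(self, z):
        h = self.act(self.dec_in(z))
        h = h + self.act(self.dec_hidden(h))      # skip connection: input added to output
        return torch.sigmoid(self.dec_out(h))     # multi-label probabilities

    def forward(self, y):
        return self.decode(self.encode(y))

# Reconstruction training on sparse binary label vectors (random data for illustration)
model = SkipAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
labels = (torch.rand(128, 1000) < 0.05).float()
for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(labels), labels)
    loss.backward()
    optimizer.step()
```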

Strategy of Multistage Gamma Knife Radiosurgery for Large Lesions (큰 병변에 대한 다단계 감마나이프 방사선수술의 전략)

  • Hur, Beong Ik
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.5
    • /
    • pp.801-809
    • /
    • 2019
  • Existing Gamma Knife radiosurgery (GKRS) for large lesions is often conducted in stages with volume or dose partitioning. In the case of volume division, the target is usually divided into sub-volumes, which are irradiated at the determined prescription dose in multiple sessions separated by a day or two, or by 3~6 months. Over the entire course of treatment, the treatment information of the previous stages needs to be reflected in subsequent sessions on the newly mounted stereotactic frame through coordinate transformation between sessions. However, it is practically difficult to reproduce the previous dose distributions with the existing Gamma Knife system except in the same stereotactic space. The treatment area is expanding because multistage treatment is now possible with the latest Gamma Knife Platform (GKP). The purpose of this study is to introduce image co-registration based on the stereotactic spaces and the strategy of multistage GKRS, such as the determination of the prescription dose at each stage, using the new GKP. Usually, in image co-registration, either surgically embedded fiducials or internal anatomical landmarks are used to determine the transformation relationship. As an example using internal anatomical landmarks, the author compared the accuracy of the coordinate transformation between sessions using four or six anatomical landmarks. The transformation matrix between two stereotactic spaces was determined using the pseudoinverse or singular value decomposition to minimize the discrepancy between measured and calculated coordinates. To evaluate the transformation accuracy, the difference between measured and transformed coordinates, i.e., Δr, was calculated using 10 landmarks. Four or six points among the 10 landmarks were used to determine the coordinate transformation, and the rest were used to evaluate the approach. The values of Δr for the two approaches ranged from 0.6 mm to 2.4 mm and from 0.17 mm to 0.57 mm, respectively. In addition, a method was suggested for determining the prescription dose that gives the same effect as treating the total lesion at once when the lesion is split. The strategy of multistage treatment in the same stereotactic space is to design the treatment for the whole lesion first, then divide the shots of the whole-lesion design into the shots of each stage and determine the appropriate prescription dose at each stage. In conclusion, the author confirmed the accuracy of the prescription dose determination as a multistage treatment strategy and found that using as many internal landmarks as possible, rather than a small number, to determine the coordinate transformation between sessions yielded better results. In the future, the proposed multistage treatment strategy will be a great contribution to frameless fractionated treatment at many Gamma Knife centers.
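  Below is a minimal Python/NumPy sketch of the landmark-based fit named in this abstract: a least-squares (pseudoinverse-based) transformation between two stereotactic coordinate spaces, determined from a subset of landmarks and evaluated as Δr on held-out landmarks. The landmark coordinates, noise level, and affine (rather than strictly rigid) model are illustrative assumptions.

```python
import numpy as np

def fit_affine_transform(src, dst):
    """Least-squares affine transform mapping src -> dst landmark coordinates
    (N x 3 arrays), solved via a pseudoinverse/SVD-based lstsq."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])          # homogeneous coordinates
    T, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # solves src_h @ T ~= dst
    return T

def transform(points, T):
    return np.hstack([points, np.ones((points.shape[0], 1))]) @ T

# Illustrative check: 6 landmarks define the transform, 4 held-out landmarks evaluate it.
rng = np.random.default_rng(1)
true_R = np.linalg.qr(rng.standard_normal((3, 3)))[0]  # random orthogonal matrix
true_t = np.array([2.0, -1.0, 0.5])
all_src = rng.uniform(-60, 60, size=(10, 3))                        # mm, space A
all_dst = all_src @ true_R + true_t + rng.normal(0, 0.2, (10, 3))   # space B + noise

T = fit_affine_transform(all_src[:6], all_dst[:6])
residual = np.linalg.norm(transform(all_src[6:], T) - all_dst[6:], axis=1)
print("Δr per held-out landmark (mm):", np.round(residual, 2))
```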

Comparative Study on the Carbon Stock Changes Measurement Methodologies of Perennial Woody Crops-focusing on Overseas Cases (다년생 목본작물의 탄소축적 변화량 산정방법론 비교 연구-해외사례를 중심으로)

  • Hae-In Lee;Yong-Ju Lee;Kyeong-Hak Lee;Chang-Bae Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.258-266
    • /
    • 2023
  • This study analyzed methodologies for estimating the carbon stocks of perennial woody crops and related research cases in overseas countries. As a result, we found that Australia, Bulgaria, Canada, and Japan use the stock-difference method, while Austria, Denmark, and Germany estimate the change in carbon stock based on the gain-loss method. In some overseas countries, research has been conducted on estimating the carbon stock change using image data at the tier 3 level, beyond developing country-specific factors at the tier 2 level. In South Korea, such third-stage convergence studies have been conducted in the forestry field, but advanced research in the agricultural field is only beginning. Based on these results, we suggest the following four directions for future research: 1) securing national-specific factors related to emissions and removals in the agricultural field through the development of allometric equations and carbon conversion factors for perennial woody crops, to improve the completeness of emission and removal statistics; 2) implementing policy studies on refining the calculation of cultivation areas based on fruit-tree biomass and maturity; 3) developing a more advanced estimation technique for perennial woody crops in the agricultural sector using allometric equations and remote sensing techniques based on the agricultural and forestry satellite scheduled to be launched in 2025, and establishing a matrix and monitoring system for perennial woody crop cultivation areas in the agricultural sector; and lastly, 4) estimating soil carbon stock changes, which are currently estimated by treating all agricultural areas as one, by sub-land classification to implement a dynamic carbon cycle model. This study suggests a detailed guideline and advanced methods for calculating carbon stock changes of perennial woody crops, which supports the 2050 Carbon Neutral Strategy of the Ministry of Agriculture, Food and Rural Affairs and should stimulate related research in the agricultural sector.
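  For readers unfamiliar with the two estimation approaches contrasted in this abstract, here is a tiny Python sketch of the stock-difference and gain-loss calculations; the carbon stock values are made-up numbers, not estimates from any of the countries discussed.

```python
def stock_difference(c_t1, c_t2, years):
    """Stock-difference method: annual carbon stock change from two inventories (tC/ha/yr)."""
    return (c_t2 - c_t1) / years

def gain_loss(annual_gain, annual_loss):
    """Gain-loss method: annual growth increment minus removals and losses (tC/ha/yr)."""
    return annual_gain - annual_loss

print(stock_difference(c_t1=35.0, c_t2=41.0, years=5))   # 1.2 tC/ha/yr
print(gain_loss(annual_gain=2.0, annual_loss=0.8))       # 1.2 tC/ha/yr
```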

Gd(DTPA)²⁻-enhanced and Quantitative MR Imaging in Articular Cartilage (관절연골의 Gd(DTPA)²⁻-조영증강 및 정량적 자기공명영상에 대한 실험적 연구)

  • Eun Choong-Ki;Lee Yeong-Joon;Park Auh-Whan;Park Yeong-Mi;Bae Jae-Ik;Ryu Ji Hwa;Baik Dae-Il;Jung Soo-Jin;Lee Seon-Joo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.2
    • /
    • pp.100-108
    • /
    • 2004
  • Purpose : Early degeneration of articular cartilage is accompanied by a loss of glycosaminoglycan (GAG) and a consequent change in tissue integrity. The purpose of this study was to biochemically quantify the loss of GAG and to evaluate Gd(DTPA)²⁻-enhanced imaging and T1, T2, and rho relaxation maps for the detection of early cartilage degeneration. Materials and Methods : A cartilage-bone block of 8 mm × 10 mm was acquired from the patella of each of three pigs. Quantitative analysis of cartilage GAG was performed by spectrophotometry using dimethylmethylene blue. Each cartilage block was cultured in one of three different media: two culture media (0.2 mg/ml trypsin solution, and 1 mM Gd(DTPA)²⁻-mixed trypsin solution) and a control medium (phosphate-buffered saline, PBS). The cartilage blocks were cultured for 5 hrs, during which MR images of the blocks were obtained at one-hour intervals (0 hr, 1 hr, 2 hr, 3 hr, 4 hr, 5 hr). Additional culture was then carried out for 24 hrs and 48 hrs. Both a T1-weighted image (TR/TE, 450/22 ms) and a mixed-echo sequence (TR/TE, 760/21-168 ms; 8 echoes) were obtained at all time points using a field of view of 50 mm, a slice thickness of 2 mm, and a 256×512 matrix. The MRI data were analyzed with pixel-by-pixel comparisons. The cultured cartilage-bone blocks were examined microscopically using hematoxylin & eosin, toluidine blue, alcian blue, and trichrome stains. Results : On quantitative analysis, the GAG concentration in the culture solutions was proportional to the culture duration. The T1 signal of the cartilage-bone block cultured in the Gd(DTPA)²⁻-mixed solution was significantly higher (42% on average, p<0.05) than that of the cartilage-bone block cultured in the trypsin solution alone. The T1, T2, and rho relaxation times of the cultured tissue were not significantly correlated with culture duration (p>0.05). However, a focal increase in T1 relaxation time in the superficial and transitional layers of the cartilage was seen in the Gd(DTPA)²⁻-mixed culture. Toluidine blue and alcian blue stains revealed multiple defects in the whole thickness of the cartilage cultured in the trypsin media. Conclusion : The quantitative analysis showed a gradual loss of GAG proportional to the culture duration. Microimaging of cartilage with Gd(DTPA)²⁻ enhancement and relaxation maps was available at a pixel size of 97.9 × 195 μm. The loss of GAG over time was better demonstrated with Gd(DTPA)²⁻-enhanced images than with T1, T2, or rho relaxation maps. Therefore, Gd(DTPA)²⁻-enhanced T1-weighted imaging is superior for detecting early degeneration of cartilage.
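  To illustrate the pixel-by-pixel relaxation mapping mentioned in this abstract, here is a minimal Python/NumPy sketch that fits a mono-exponential T2 decay to synthetic 8-echo data (TE 21-168 ms, as in the abstract) via a log-linear least-squares fit per pixel; the matrix size, noise level, and tissue T2 values are invented for illustration.

```python
import numpy as np

# Signal model S(TE) = S0 * exp(-TE / T2), linearized as log S = log S0 - TE / T2.
echo_times = np.linspace(21.0, 168.0, 8)                    # ms, 8 echoes
ny, nx = 64, 64                                             # illustrative matrix size
true_t2 = np.full((ny, nx), 40.0)
true_t2[20:40, 20:40] = 80.0                                # synthetic "lesion"
signals = np.exp(-echo_times[:, None, None] / true_t2) * 1000.0
signals += np.random.default_rng(0).normal(0, 5.0, signals.shape)   # noise

log_s = np.log(np.clip(signals, 1e-6, None)).reshape(len(echo_times), -1)
design = np.vstack([np.ones_like(echo_times), -echo_times]).T       # columns [1, -TE]
coef, *_ = np.linalg.lstsq(design, log_s, rcond=None)               # per-pixel fit
t2_map = (1.0 / coef[1]).reshape(ny, nx)                            # T2 in ms
print(t2_map[5, 5], t2_map[30, 30])                                 # roughly 40 vs 80 ms
```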
