• Title/Summary/Keyword: Technology Use


The Influence Evaluation of $^{201}Tl$ Myocardial Perfusion SPECT Image According to the Elapsed Time Difference after the Whole Body Bone Scan (전신 뼈 스캔 후 경과 시간 차이에 따른 $^{201}Tl$ 심근관류 SPECT 영상의 영향 평가)

  • Kim, Dong-Seok;Yoo, Hee-Jae;Ryu, Jae-Kwang;Yoo, Jae-Sook
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.67-72 / 2010
  • Purpose: At Asan Medical Center, we perform myocardial perfusion SPECT to evaluate the cardiac event risk of patients scheduled for non-cardiac surgery. In patients with cancer, we first check for tumor metastasis with a whole-body bone scan or whole-body PET scan and then perform myocardial perfusion SPECT, to avoid unnecessary examinations. To shorten the hospitalization period of inpatients, we perform $^{201}Tl$ myocardial perfusion SPECT a minimum of 16 hours after the whole-body bone scan, but the crosstalk contamination caused by administering two different isotopes has not been properly evaluated. In this study, we therefore evaluated the influence of crosstalk contamination on $^{201}Tl$ myocardial perfusion SPECT using an anthropomorphic torso phantom and patient data. Materials and Methods: From August to September 2009, we analyzed 87 patients who underwent $^{201}Tl$ myocardial perfusion SPECT, classified according to whether a whole-body bone scan had been performed the previous day. Image data were acquired with a dual energy window during $^{201}Tl$ myocardial perfusion SPECT, and the ratio of $^{201}Tl$ counts to $^{99m}Tc$ counts was analyzed for each patient group. For the phantom experiment, we administered $^{201}Tl$ 14.8 MBq (0.4 mCi) to the myocardium and $^{99m}Tc$ 44.4 MBq (1.2 mCi) to the extracardiac region of an anthropomorphic torso phantom, acquired $^{201}Tl$ myocardial perfusion SPECT images without gating, and analyzed spatial resolution using Xeleris ver 2.0551. Results: In patients who had undergone a whole-body bone scan the previous day, the ratio of counts in the $^{201}Tl$ window to counts in the $^{99m}Tc$ window fell exponentially with elapsed time, from 1:0.411 (Ventri; GE Healthcare, Wisconsin, USA) and 1:0.79 (Infinia; GE Healthcare, Wisconsin, USA) at 12 hours after bone tracer injection to 1:0.114 (Ventri) and 1:0.249 (Infinia) at 24 hours (Ventri p=0.001, Infinia p=0.001). In patients who had not undergone a whole-body bone scan, the ratio averaged 1:$0.067{\pm}0.6$ on Ventri and 1:$0.063{\pm}0.7$ on Infinia. In the phantom experiment, the spatial resolution measurements showed no significant change in FWHM with or without $^{99m}Tc$ administration or with elapsed time (p=0.134). Conclusion: Through experiments using an anthropomorphic torso phantom and patient data, we confirmed that $^{201}Tl$ myocardial perfusion SPECT images acquired 16 hours or more after bone tracer injection suffer no notable loss of spatial resolution from $^{99m}Tc$. However, this study addressed image quality only, so further investigation of patient radiation dose and of examination accuracy and precision is needed. An exact guideline on the examination interval should be established later through a standardized validation test of the crosstalk contamination that arises when different isotopes are used.
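
The 16-hour minimum interval makes sense in light of the 6.01-hour physical half-life of $^{99m}Tc$: the bone tracer's contribution to the $^{99m}Tc$ window decays exponentially. A minimal sketch of the residual activity fraction (an illustration from the stated half-life, not code from the paper):

```python
# Residual 99mTc activity fraction after an elapsed time, assuming pure
# physical decay (biological clearance would reduce it further).
T_HALF_TC99M_H = 6.01  # physical half-life of 99mTc, hours

def residual_fraction(elapsed_h: float) -> float:
    return 0.5 ** (elapsed_h / T_HALF_TC99M_H)

for t in (12, 16, 24):
    print(f"{t:>2} h after injection: {residual_fraction(t):.3f} of initial activity")
# 12 h -> ~0.251, 16 h -> ~0.158, 24 h -> ~0.063, consistent with the
# exponential fall of the 99mTc-window count ratio between 12 and 24 hours.
```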

The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.67-72 / 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires that users perform a precision test to assure quality, but it gives no recommendation on how to select patients for the test. We therefore investigated the effect of bone density status on the precision test by measuring reproducibility in three bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: Four users performed precision tests on 420 patients (age: $57.8{\pm}9.02$) undergoing BMD measurement at Asan Medical Center (JAN-2008 ~ JUN-2008). In the first group (A), each of the four users selected 30 patients regardless of bone density status and measured two sites (L-spine, femur) twice. In the second group (B), each of the four users measured 10 patients in the same manner as group A, but with the patients divided into three stages (normal, osteopenia, osteoporosis). In the third group (C), two users each measured 30 patients in the same manner as group A, taking bone density status into account. We used a GE Lunar Prodigy Advance (Encore V11.4) and analyzed the results by comparing the %CV to the LSC using the precision tool from the ISCD. Statistical verification was done using SPSS. Results: In group A, the %CV values calculated for the four users (a, b, c, d) were 1.16, 1.01, 1.19, 0.65% in the L-spine and 0.69, 0.58, 0.97, 0.47% in the femur. In group B, the %CV values were 1.01, 1.19, 0.83, 1.37% in the L-spine and 1.03, 0.54, 0.69, 0.58% in the femur. Comparing groups A and B, we found no considerable difference. In group C, user 1's %CV values for normal, osteopenia and osteoporosis were 1.26, 0.94, 0.94% in the L-spine and 0.94, 0.79, 1.01% in the femur; user 2's values were 0.97, 0.83, 0.72% in the L-spine and 0.65, 0.65, 1.05% in the femur. The analysis showed almost no difference in reproducibility by bone density status, although differences between the two users' individual values affected the total reproducibility. Conclusions: The precision test is an important factor in bone density follow-up. When machine and user reproducibility improve, the narrower range of deviation makes the test more useful in the clinic. Users should check the machine's reproducibility before testing and apply the same care to every BMD examination. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. In osteoporosis patients it is harder to set the initial ROI accurately than in normal or osteopenic patients because of poor bone recognition, even though the ROI is drawn automatically by the software. The initial ROI is nevertheless very important, and users must draw it consistently because the ROI Copy function is used at follow-up. In this study, we performed the precision test taking bone density status into account and found that the LSC stayed within 3%, with no considerable difference between groups. Thus, patients for the precision test can be selected regardless of bone density status.
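
For reference, the ISCD precision methodology behind these figures computes a root-mean-square %CV over duplicate scans and converts it to the least significant change (LSC) at 95% confidence using a factor of 2.77. A minimal sketch with hypothetical duplicate measurements:

```python
import math

def rms_percent_cv(pairs):
    """ISCD precision: RMS %CV over patients, each scanned twice.

    pairs: list of (bmd1, bmd2) duplicate BMD measurements in g/cm^2.
    """
    sq = []
    for m1, m2 in pairs:
        mean = (m1 + m2) / 2
        sd = abs(m1 - m2) / math.sqrt(2)  # SD of a duplicate pair
        sq.append((sd / mean) ** 2)
    return 100 * math.sqrt(sum(sq) / len(sq))

def lsc_95(percent_cv: float) -> float:
    """Least significant change at 95% confidence (ISCD factor 2.77)."""
    return 2.77 * percent_cv

pairs = [(1.021, 1.034), (0.912, 0.905), (1.155, 1.149)]  # hypothetical data
cv = rms_percent_cv(pairs)
print(f"%CV = {cv:.2f}%, LSC = {lsc_95(cv):.2f}%")
```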

The Effect of PET/CT Images on SUV with the Correction of CT Image by Using Contrast Media (PET/CT 영상에서 조영제를 이용한 CT 영상의 보정(Correction)에 따른 표준화섭취계수(SUV)의 영향)

  • Ahn, Sha-Ron;Park, Hoon-Hee;Park, Min-Soo;Lee, Seung-Jae;Oh, Shin-Hyun;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.77-81 / 2009
  • Purpose: The PET component of PET/CT (Positron Emission Tomography/Computed Tomography) quantitatively shows biological and chemical information about the body but is limited in presenting clear anatomic structure. Combining PET with CT not only offers higher resolution but also effectively shortens scanning time and reduces noise by using the CT data for attenuation correction. Because contrast media at CT scanning makes it easier to delineate the exact extent of a lesion and to distinguish normal organs, its use is steadily increasing. However, contrast media affects the semi-quantitative measures of PET/CT images. In this study, we therefore sought to establish the reliability of the SUV (Standardized Uptake Value) with CT data correction, to support more accurate diagnosis. Materials and Methods: A total of 30 subjects were included (age range 27 to 72, average age 49.6), and a DSTe scanner (General Electric Healthcare, Milwaukee, WI, USA) was used. $^{18}F$-FDG 370~555 MBq was injected depending on body weight and, after about 60 minutes in a resting position, a whole-body scan was acquired. The CT scan was set to 140 kV and 210 mA, and the contrast media dose was 2 cc per kg of body weight. From the raw data, we reconstructed images with attenuation correction based on both the contrast-corrected and the uncorrected CT data, drew an ROI (Region of Interest) in each area, measured the SUV, and analyzed the differences. Results: With contrast media correction, the SUV decreased in the liver and heart, which have greater blood flow than the other organs, while there was no difference in the lungs. Conclusions: CT images with contrast media in PET/CT increase the contrast of the target region and thereby improve diagnostic efficiency, but they also increase the SUV, a semi-quantitative measure. In this study, we measured the variation of the SUV after correcting for the influence of contrast media and compared the differences. By revising the SUV that is inflated when attenuation correction uses contrast-enhanced CT, we can obtain high-resolution anatomical images together with a trustworthy semi-quantitative method, which should enhance diagnostic value.
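
For reference, the body-weight SUV compared here is the tissue activity concentration divided by the injected dose per body weight. A minimal sketch with illustrative values (not the study's data):

```python
def suv_bw(c_img_bq_ml: float, injected_dose_bq: float, weight_g: float) -> float:
    """Body-weight SUV: tissue concentration / (injected dose / body weight).

    c_img_bq_ml: decay-corrected activity concentration in the ROI (Bq/mL)
    injected_dose_bq: injected activity, decay-corrected to scan time (Bq)
    weight_g: patient weight in grams (1 g/mL tissue density assumed)
    """
    return c_img_bq_ml / (injected_dose_bq / weight_g)

# Example: 5 kBq/mL in the ROI, 370 MBq injected, 70 kg patient -> SUV ~ 0.95
print(suv_bw(5_000, 370e6, 70_000))
```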

Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.83-89 / 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm provided by UltraSPECT, Ltd. (U.S.) improves image resolution by eliminating the effect of the collimator's line spread function and by suppressing noise; it controls the resolution and noise level automatically and yields excellent image quality. The aim of this study is to evaluate the clinical usefulness of WBR for whole-body bone scans. Materials and Methods: Standard line-source and single photon emission computed tomography (SPECT) spatial resolution measurements were performed on an Infinia (GE, Milwaukee, WI) gamma camera equipped with low energy high resolution (LEHR) collimators. Line-source measurements were acquired at count rates of 200 kcps and 300 kcps, and the SPECT phantom studies analyzed spatial resolution while varying the matrix size. A clinical evaluation was also performed with forty-three patients referred for bone scans: the first group was scanned at speeds of 20 and 30 cm/min after administration of 740 MBq (20 mCi) of $^{99m}Tc$-HDP, while the second group received 740 or 1,110 MBq (20 or 30 mCi) of $^{99m}Tc$-HDP at the same scan speed. The acquired data were reconstructed using both the typical clinical protocol and the WBR protocol. Patient information was removed, a blind reading was done for each reconstruction method, and for each reading a questionnaire was completed in which the reader scored the images on a 5-point scale. Results: Planar WBR data improved resolution by more than 10%; the Full Width at Half Maximum (FWHM) improved by about 16% (standard: 8.45, WBR: 7.09). SPECT WBR data improved resolution by about 50% as measured by FWHM (standard: 3.52, WBR: 1.65). In the clinical evaluation there was no statistically significant difference between the two methods, including the bone to soft tissue ratio and the image resolution (first group p=0.07, second group p=0.458). Conclusion: The WBR method shortens the acquisition time of bone scans while providing improved image quality, and it allows the radiopharmaceutical dosage, and hence the radiation dose, to be reduced. The WBR method can therefore be applied across a wide range of clinical applications, providing clinical value as well as image quality.
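
The FWHM figures above are typically read off a line-spread profile. A minimal sketch of that measurement (an illustration, not the vendor's implementation):

```python
import numpy as np

def fwhm(profile: np.ndarray, pixel_mm: float = 1.0) -> float:
    """Estimate FWHM of a peaked 1-D profile by interpolating at half maximum."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0: int, i1: int) -> float:
        # linear interpolation of the half-max crossing between samples i0, i1
        y0, y1 = profile[i0], profile[i1]
        return i0 + (half - y0) / (y1 - y0) * (i1 - i0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_mm

# Hypothetical Gaussian line profile, sigma = 3 px -> FWHM ~ 2.355 * 3 = 7.06 px
x = np.arange(64)
profile = np.exp(-0.5 * ((x - 32) / 3.0) ** 2)
print(fwhm(profile))
```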

The hydrodynamic characteristics of the canvas kite - 2. The characteristics of the triangular canvas kite - (캔버스 카이트의 유체역학적 특성에 관한 연구 - 2. 삼각형 캔버스 카이트의 특성 -)

  • Bae, Bong-Seong;Bae, Jae-Hyun;An, Heui-Chun;Lee, Ju-Hee;Shin, Jung-Wook
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.40 no.3 / pp.206-213 / 2004
  • As far as opening devices for fishing gear are concerned, applications of kites are under development around the world. Typical examples are found in the opening device of the stow net on anchor and in the buoyancy material of the trawl. While the stow net on anchor has proved its capability over the past 20 years, the kite trawl has not been widely used, since it was first introduced for commercial use without sufficient study and thus revealed many drawbacks. The fundamental hydrodynamics of the kite itself therefore need to be studied further. Models of plate and canvas kites were deployed in a circulating water tank for mechanical testing. Lift and drag tests were performed for shapes of differing aspect ratio (rectangle and trapezoid). The results are summarized as follows, where aspect ratio, attack angle, lift coefficient and maximum lift coefficient are denoted A, B, $C_L$ and $C_{Lmax}$ respectively: 1. For the triangular plate, $C_{Lmax}$ was 1.26${\sim}$1.32 with A${\leq}$1 and 38$^{\circ}{\leq}$B${\leq}$42$^{\circ}$; when A${\geq}$1.5 and 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$, $C_L$ was around 0.85. For the inverted triangular plate, $C_{Lmax}$ was 1.46${\sim}$1.56 with A${\leq}$1 and 36$^{\circ}{\leq}$B${\leq}$38$^{\circ}$; when A${\geq}$1.5 and 22$^{\circ}{\leq}$B${\leq}$26$^{\circ}$, $C_{Lmax}$ was 1.05${\sim}$1.21. For the triangular kite, $C_{Lmax}$ was 1.67${\sim}$1.77 with A${\leq}$1 and 46$^{\circ}{\leq}$B${\leq}$48$^{\circ}$; when A${\geq}$1.5 and 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$, $C_L$ was around 1.10. For the inverted triangular kite, $C_{Lmax}$ was 1.44${\sim}$1.68 with A${\leq}$1 and 28$^{\circ}{\leq}$B${\leq}$32$^{\circ}$; when A${\geq}$1.5 and 18$^{\circ}{\leq}$B${\leq}$24$^{\circ}$, $C_{Lmax}$ was 1.03${\sim}$1.18. 2. For the model with A=1/2, increasing B increased $C_L$ until $C_L$ reached its maximum, after which $C_L$ decreased very gradually or remained unchanged. The model with A=2/3 behaved similarly. For the model with A=1, increasing B increased $C_L$ until the maximum was reached, and $C_L$ did not change dramatically thereafter. For the model with A=1.5, $C_L$ varied only slightly with B, ranging 0.75${\sim}$1.22 over 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$. For the model with A=2, the trend of $C_L$ as a function of B was almost the same as in the triangular model, with no considerable change over 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$. 3. As B increased, the inverted models' $C_L$ reached its maximum rapidly and then decreased gradually compared with the non-inverted models; the others decreased dramatically. 4. The action point of dynamic pressure was close to the rear of the model at small attack angles and close to the front at large attack angles. 5. A camber vertex appeared where the fluid pressure was generated; the triangular canvas had a larger camber vertex at high aspect ratio, while the inverted triangular canvas showed the opposite. 6. All canvas kites had a larger camber ratio at higher aspect ratio; the triangular canvas had a larger camber ratio at higher attack angle, while the inverted triangular canvas showed the opposite.
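
For reference, the lift coefficient $C_L$ reported above is conventionally computed from the measured lift force, the flow speed, and the model area. A minimal sketch with hypothetical tank values (not the authors' data):

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water in a circulating water tank

def lift_coefficient(lift_n: float, speed_m_s: float, area_m2: float) -> float:
    """Standard definition: C_L = L / (0.5 * rho * U^2 * S)."""
    return lift_n / (0.5 * RHO_WATER * speed_m_s**2 * area_m2)

# Hypothetical: 12 N of lift on a 0.04 m^2 model at 0.8 m/s -> C_L ~ 0.94
print(lift_coefficient(12.0, 0.8, 0.04))
```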

A Study on the Fabrication of the Laminated Wood Composed of Poplar and Larch (포푸라와 일본잎갈나무의 집성재 제조에 관한 연구)

  • Jo, Jae-Myeong;Kang, Sun-Goo;Kim, Ki-Hyeon;Chung, Byeong-Jae
    • Journal of the Korean Wood Science and Technology / v.2 no.2 / pp.25-31 / 1974
  • 1. Various gluing qualities using Resorcinol Plyophen #6000 were studied, focusing on the strength of laminated woods made from a single species [poplar (Populus deltoides), larch (Larix leptolepis)], from mixed species (poplar and larch), from preservative-treated poplar, from scarf joints of mixed poplar and larch, and from preservative-treated scarf joints. 1.1 In the block shear and DVL tension tests, the mean wood failure ratio showed an excellent value, i.e., above 65%; the tangential strength of larch was higher than the radial strength, but the reverse held for poplar, as shown in Tables 1 and 2. 1.2 Laminae treated with Na-PCP showed slightly reduced strength, but the reduction did not bring it below the strength limit allowed for manufacturing laminated wood, as shown in Tables 3 and 4. 1.3 The safe scarf ratio in the plane scarf joint was above 1/12 for larch and 1/6 for poplar, regardless of chemical treatment, as shown in Tables 5, 6, 7 and 8. 2. In the normal and boiled states, the gluing quality of laminated wood composed of single [poplar (Populus deltoides), larch (Larix leptolepis)] and mixed species (poplar and larch) glued with Resorcinol Plyophen #6000 was measured as follows, along with the delamination of the same laminated wood. 2.1 The normal block shear strength of the straight and curved laminated wood (in life size) was more than three times the standard adhesion strength. The value for the boiled stock decreased to about one half, but it was still more than twice the standard shear adhesion strength for boiled stock, so the water resistance of Resorcinol Plyophen #6000 was recognized as very high, as shown in Tables 9 and 10. 2.2 The delamination ratios of the straight and curved laminated woods decreased in the following order of composition: larch, mixed stock (larch+poplar), poplar. The maximum value, shown by larch, was 3.5%, which was below the allowed limit, as shown in Table 11. 3. The compressive, bending and adhesion strengths of straight laminated wood constructed from five plies of single and mixed species of lamina, i.e., larch (Larix leptolepis) and poplar (Populus euramericana), glued with urea resin, were as follows: 3.1 If a higher strength of architectural laminated wood composed of poplar (P) and larch (L) is desired, the laminas should be arranged as L+P+L+P+L, as shown in Table 12. 3.2 The strength of laminated wood composed of laminas including pith and knots was considerably lower than that of clear laminas, as shown in Table 13. 3.3 The shear strength of the FPL block of straight laminated wood of the same species glued with urea adhesives was more than twice the allowed adhesion strength, making it usable as interior constructional stock.

The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.43-48 / 2014
  • Purpose: In PET/CT images, the SUV (standardized uptake value) enables quantitative assessment of the biological changes of organs and serves as an index for distinguishing malignant from benign lesions. It is therefore important to enter correctly the parameters that affect the SUV. The purpose of this study is to establish an allowable error range for the SUV by measuring how the results differ with input errors in the Activity, Weight, and uptake Time parameters. Materials and Methods: Three inserts (Hot, Teflon and Air) were placed in a 1994 NEMA phantom, which was filled with 27.3 MBq/mL of $^{18}F$-FDG. The activity ratio of the hot-spot area to the background area was set to 4:1. After scanning, images were re-reconstructed after introducing input errors of ${\pm}5%$, 10%, 15%, 30%, 50% in the Activity, Weight, and uptake Time parameters relative to the original data. ROIs (regions of interest) were drawn, one in each insert area and four in the background areas. $SUV_{mean}$ and percentage differences were calculated and compared for each area. Results: The $SUV_{mean}$ of the Hot, Teflon, Air and BKG (background) areas in the original images were 4.5, 0.02, 0.1 and 1.0. With Activity errors, the minimum and maximum $SUV_{mean}$ were 3.0 and 9.0 in the Hot area, 0.01 and 0.04 in Teflon, 0.1 and 0.3 in Air, and 0.6 and 2.0 in BKG; the percentage differences ranged uniformly from -33% to 100%. With Weight errors, $SUV_{mean}$ ranged from 2.2 to 6.7 in Hot, 0.01 to 0.03 in Teflon, 0.09 to 0.28 in Air, and 0.5 to 1.5 in BKG; the percentage differences ranged uniformly from -50% to 50%, except in the Teflon area, where they ranged from -50% to 52%. With uptake Time errors, $SUV_{mean}$ ranged from 3.8 to 5.3 in Hot, 0.01 to 0.02 in Teflon, 0.1 to 0.2 in Air, and 0.8 to 1.2 in BKG; the percentage differences ranged from 17% to -14% in the Hot and BKG areas, from -50% to 52% in Teflon, and from -12% to 20% in Air. Conclusion: As the results show, Activity and Weight errors must stay within ${\pm}5%$ if the allowable SUV error range is set at 5%. The dose calibrator and the weighing machine must therefore be calibrated to within a ${\pm}5%$ error, since they determine the Activity and Weight values. Time errors showed different error ranges depending on the insert type: the Hot and BKG areas stayed within a 5% SUV error when the Time error was within ${\pm}15%$. We therefore have to account for offsets between clocks whenever two or more clocks, including the scanner's, are used during examinations.
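
These sensitivities follow directly from how the SUV is computed: it scales inversely with the entered activity, linearly with the entered weight, and through the $^{18}F$ decay correction with the entered uptake time. A minimal sketch of the expected percentage change (an illustration, not the study's software):

```python
T_HALF_F18_MIN = 109.77  # physical half-life of 18F, minutes

def suv_change_pct(activity_err_pct=0.0, weight_err_pct=0.0, time_err_min=0.0):
    """Relative SUV change caused by parameter-entry errors.

    SUV = C * W / (A0 * 0.5**(t / T)), so an entered-activity error scales the
    SUV by 1/(1 + e_A), a weight error by (1 + e_W), and an uptake-time error
    by 2**(dt / T) through the decay correction.
    """
    factor = ((1 + weight_err_pct / 100)
              / (1 + activity_err_pct / 100)
              * 2 ** (time_err_min / T_HALF_F18_MIN))
    return 100 * (factor - 1)

print(round(suv_change_pct(activity_err_pct=+50), 1))  # -33.3: overstated activity lowers SUV
print(round(suv_change_pct(weight_err_pct=-50), 1))    # -50.0: understated weight lowers SUV
print(round(suv_change_pct(time_err_min=9), 1))        # ~ +5.8: uptake time entered 9 min too long
```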

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.147-157 / 2013
  • With the advent of a digital environment that can be accessed anytime, anywhere through high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment also enables a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works at all is controversial. According to a Supreme Court decision in 2001, advertising photographs of ham products were judged to merely reproduce the appearance of the objects in order to convey product information, and thus not to be creative expression; the photographer's losses were nevertheless recognized, and damages were estimated from the typical cost of an advertising photo shoot. According to a Seoul District Court precedent in 2003, if the photographer's personality and creativity are present in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must do more than simply convey the state of the product; effort in which the photographer's personality and creativity can be recognized is required. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. Product images in online shopping malls have a very particular composition, unlike general pictures such as portraits and landscape photos, so general image watermarking techniques cannot satisfy their watermarking requirements. Because the background of product images commonly used in shopping malls is white, black, or a gray-scale gradient, there is little space in which to embed a watermark, and those areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technique suitable for them is proposed. The proposed technique divides a product image into small blocks, transforms the corresponding blocks by DCT (Discrete Cosine Transform), and inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients near block boundaries finely and the coefficients near the block center coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are selected randomly, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal to Noise Ratio) of shopping mall images watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF=70 is 0. This means the watermarked images are of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. The BER is also 0 for a 40% change in size and a 40-degree rotation. In general, shopping malls use compressed images with a QF higher than 90. Because a pirated image is replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. However, future work should enhance the robustness of the algorithm, because some robustness is lost after the mask process.
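
The embedding step described above can be sketched as follows. This is a minimal illustration of quantization-based watermarking of a DCT coefficient in an 8x8 block; the paper's weighted mask, random block selection, and turbo coding are omitted, and the block size, coefficient position, and quantization step are assumptions:

```python
# Minimal sketch of DCT-coefficient quantization watermarking (QIM-style).
import numpy as np
from scipy.fft import dctn, idctn

BLOCK, COEF, DELTA = 8, (3, 2), 12.0  # block size, embedding position, step

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Embed one bit by snapping a DCT coefficient to a multiple of DELTA
    whose parity (even/odd) encodes the bit."""
    d = dctn(block.astype(float), norm="ortho")
    q = int(np.round(d[COEF] / DELTA))
    if q % 2 != bit:
        q += 1
    d[COEF] = q * DELTA
    return idctn(d, norm="ortho")

def extract_bit(block: np.ndarray) -> int:
    """Recover the bit from the parity of the quantized coefficient."""
    d = dctn(block.astype(float), norm="ortho")
    return int(np.round(d[COEF] / DELTA)) % 2

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (BLOCK, BLOCK))  # stand-in for an image block
for bit in (0, 1):
    assert extract_bit(embed_bit(block, bit)) == bit
```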

Approach to the Extraction Method on Minerals of Ginseng Extract (추출조건(抽出條件)에 따른 인삼(人蔘)엑기스의 무기성분정량(無機成分定量)에 관(關)한 연구(硏究))

  • Cho, Han-Ok;Lee, Joong-Hwa;Cho, Sung-Hwan;Choi, Young-Hee
    • Korean Journal of Food Science and Technology / v.8 no.2 / pp.95-106 / 1976
  • In order to investigate the chemical components and minerals of ginseng cultivated in Korea and to establish an appropriate extraction method, this work examined raw ginseng (SC), white ginseng (SB) and ginseng tail (SA). The results are summarized as follows: 1. Among the proximate components, the moisture contents of SC, SB and SA were 66.37%, 12.61% and 12.20% respectively. The crude ash content of SA was the highest of the three kinds of ginseng root: SA 6.04%, SB 3.52% and SC 1.56%. The crude protein of the dried ginseng roots (SA and SB) was about 12~14%, more than twice that of SC (6.30%). Pure protein followed a similar tendency across the three kinds of ginseng root: 2.26% in SC, 5.94% in SB and 5.76% in SA. There was no significant difference in fat content among the kinds of ginseng root $(1.1{\sim}2.5%)$. 2. The highest extract yield was obtained using a continuous extractor, a modified Soxhlet apparatus, with 60~80% ethanol for 60 hours of extraction. 3. Ginseng and the above-mentioned extracts (ginseng tail extract: SAE, white ginseng extract: SBE, raw ginseng extract: SCE) were analyzed by volumetric methods for chlorine and calcium, by colorimetric methods for iron and phosphorus, and by atomic absorption spectrophotometry for zinc, copper and manganese. The results were as follows: 1) The phosphorus contents of SA, SB and SC were 1.818%, 1.362% and 0.713% respectively, while the phosphorus contents of the three extracts were low (SAE: 0.03%, SBE: 0.063%, SCE: 0.036%). 2) The calcium contents of SA, SB and SC were 0.147%, 0.238% and 0.126%, and those of the extracts were 0.023%, 0.011% and 0.016%. The extraction ratio of calcium from SA was the highest (15.6%), while that of SB was 4.6%. 3) The chlorine content of SA was 0.11%, slightly higher than the others (SB: 0.07%, SC: 0.09%); the extraction ratios of SA and SB were 36.4% and 67.1%, while that of SC was 84.4%. 4) The iron contents of SA, SB and SC were 125 ppm, 32.5 ppm and 20 ppm, but the extraction ratios were extremely low (SAE: 1.33%, SBE: 0.83%, SCE: 1.08%). 5) The manganese contents of SA, SB and SC were 62.5 ppm, 25.0 ppm and 5.0 ppm respectively, but the manganese content of the extracts could not be determined. The copper contents of SA and SB were 15.0 ppm and 20.0 ppm; the copper contents of the extracts were 7.5 ppm, 6.5 ppm and 4.5 ppm, with extraction ratios of 50%, 32.5% and 90% respectively. Zinc was abundant in ginseng compared with other herbs (SA: 45.5 ppm, SB: 27.5 ppm, SC: 5.5 ppm), and the extracted amounts were 4.5 ppm, 1.25 ppm and 1.50 ppm respectively.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data-imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can also be derived appropriately. This makes it possible to provide stable default risk assessment services to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has been studied actively in recent years, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology is required, given that default risk information is used very widely in the market and sensitivity to differences in default risk is high, and strict standards are likewise required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduces the bias of individual models by using stacking ensemble techniques that synthesize various machine learning models. This captures complex nonlinear relationships between default risk and corporate information while preserving the short calculation times that are an advantage of machine learning-based default risk prediction. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, since traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
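
The out-of-fold stacking procedure described above can be sketched as follows. This is a minimal illustration with generic features X, binary risk labels y standing in for the Merton-derived targets, and illustrative sub-models; it is not the authors' exact configuration:

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def oof_meta_features(models, X, y, n_splits=7, seed=0):
    """Out-of-fold sub-model predictions used as meta-features
    (seven folds, matching the seven-way split described above)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    meta = np.zeros((len(X), len(models)))
    for j, model in enumerate(models):
        for tr, va in kf.split(X):
            m = clone(model).fit(X[tr], y[tr])
            meta[va, j] = m.predict_proba(X[va])[:, 1]
    return meta

# Hypothetical data standing in for the financial-statement features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

subs = [RandomForestClassifier(n_estimators=100, random_state=0),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)]
meta_X = oof_meta_features(subs, X, y)
meta_model = LogisticRegression().fit(meta_X, y)  # combines sub-model outputs
print(meta_model.predict_proba(meta_X)[:5, 1])    # stacked default-risk scores
```

Training the meta-model on out-of-fold predictions, rather than on in-sample sub-model outputs, is what keeps the ensemble from simply memorizing its strongest sub-model.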