• Title/Abstract/Keywords: Process-parameter

Search Results: 3,080, Processing Time: 0.039 seconds

Comparison of Lambertian Model on Multi-Channel Algorithm for Estimating Land Surface Temperature Based on Remote Sensing Imagery

  • A Sediyo Adi Nugraha;Muhammad Kamal;Sigit Heru Murti;Wirastuti Widyatmanti
    • Korean Journal of Remote Sensing / v.40 no.4 / pp.397-418 / 2024
  • Land Surface Temperature (LST) is a crucial parameter in identifying drought, and it is essential to identify how its accuracy can be increased, particularly in mountainous and hilly areas. LST accuracy can be improved by applying early data processing in the correction phase, specifically topographic correction under the Lambertian model. Empirical evidence has demonstrated that this stage effectively enhances object identification, especially within areas that lack direct illumination. Therefore, this research examines the application of the Lambertian model in estimating LST using the Multi-Channel Method (MCM) across various physiographic regions. The Lambertian model utilizes Lambertian reflectance and specifically addresses the radiance values obtained from Sun-Canopy-Sensor (SCS) and Cosine Correction measurements. Applying topographic correction to the LST output notably increases the dispersion of LST values. Nevertheless, the physiography of the area also matters: plains terrain tends to have extreme LST values of ≥350 K, while in mountainous and hilly terrain the LST typically falls within 310-325 K. Without topographic correction, LST differs by 22 K in the plains, 12-21 K in hilly and mountainous terrain, and 7-9 K in mixed plains and mountainous terrain. Furthermore, validation results indicate that the Lambertian model with the SCS and Cosine Correction methods yields superior outcomes compared to processing without the Lambertian model, particularly in hilly and mountainous terrain; in plain areas, its application proves suboptimal. Additionally, the relationship between physiography and Lambertian-model LST shows a high average R2 value of 0.99. The lowest error and root mean square error, approximately ±2 K and 0.54, respectively, were achieved using the Lambertian model with the SCS method. Based on these findings, the research concludes that the Lambertian model can increase LST values; the corrected values are often higher than those obtained without it.
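
For context, the two corrections the abstract compares have standard closed forms. Below is a minimal Python sketch of the Cosine and Sun-Canopy-Sensor (SCS) corrections applied to a band radiance; the function names, the clipping floor that guards shadowed pixels, and the sample angles are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cos_incidence(slope, aspect, sun_zenith, sun_azimuth):
    # Cosine of the local solar incidence angle i (all angles in radians).
    return (np.cos(sun_zenith) * np.cos(slope)
            + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

def cosine_correction(radiance, cos_i, sun_zenith):
    # Lambertian Cosine Correction: L_h = L_t * cos(theta_s) / cos(i).
    return radiance * np.cos(sun_zenith) / np.clip(cos_i, 1e-3, None)

def scs_correction(radiance, cos_i, slope, sun_zenith):
    # Sun-Canopy-Sensor correction: L_h = L_t * cos(slope) * cos(theta_s) / cos(i).
    return radiance * np.cos(slope) * np.cos(sun_zenith) / np.clip(cos_i, 1e-3, None)

# A weakly illuminated pixel on a 30-degree slope facing away from the sun.
slope, aspect = np.radians(30.0), np.radians(180.0)
sz, sa = np.radians(40.0), np.radians(0.0)
ci = cos_incidence(slope, aspect, sz, sa)
print(cosine_correction(100.0, ci, sz), scs_correction(100.0, ci, slope, sz))
```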

Applying QFD in the Development of Sensible Brassiere for Middle Aged Women (QFD(품질 기능 전개도)를 이용한 중년 여성의 감성 Brassiere 개발)

  • Kim Jeong-hwa;Hong Kyung-hi;Scheurell Diane M.
    • Journal of the Korean Society of Clothing and Textiles / v.28 no.12 s.138 / pp.1596-1604 / 2004
  • Quality Function Deployment (QFD) is a product development tool that ensures the voice of the customer is heard and translated into products. QFD was adopted to develop a sensible brassiere for middle-aged women, and in this study its applicability and usefulness were examined through the engineering design process. Customer needs for the wear comfort of a brassiere were collected through one-on-one surveys of 100 women aged 30-40. The customer competitive assessment was generated by wearing tests of 10 commercial brassieres. The subjective assessment was conducted in an environmental chamber controlled at 28±1°C and 65±3% RH. As a result, we developed twenty-one customer needs and corresponding HOWs for the wear comfort of a brassiere. The subjective measurement scale and dimensions for the evaluation of a sensible brassiere were extracted from factor analysis; the four factors were fit, aesthetic property, pressure sensation, and displacement of the brassiere due to movement. The most critical design parameter was the wire-related property, and the second was the stretchability of the main material. Wearing comfort was also affected by the interaction of the initial stretchability of the wing and the support of the strap. As an engineering design process, QFD was applicable to the development of technical and aesthetic brassieres.
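
To make the QFD mechanics concrete, here is a minimal sketch of how technical importance scores for the HOWs are derived from weighted customer needs in a house-of-quality relationship matrix; the needs, weights, and 9-3-1 strengths below are invented for illustration, not taken from the study.

```python
import numpy as np

# Rows: customer needs (WHATs) with importance weights; columns: design
# parameters (HOWs). Relationship strengths use the usual 9-3-1 QFD scale.
need_importance = np.array([5, 4, 3])  # e.g. fit, no pressure, stays in place
relationship = np.array([
    [9, 3, 1],   # fit            vs. wire property, fabric stretch, strap support
    [3, 9, 3],   # no pressure
    [1, 3, 9],   # stays in place
])

# Technical importance of each HOW = sum over WHATs of (importance x strength).
technical_importance = need_importance @ relationship
ranking = np.argsort(technical_importance)[::-1]
print(technical_importance, ranking)
```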

Study on the Methodology of the Microbial Risk Assessment in Food (식품중 미생물 위해성평가 방법론 연구)

  • 이효민;최시내;윤은경;한지연;김창민;김길생
    • Journal of Food Hygiene and Safety / v.14 no.4 / pp.319-326 / 1999
  • Concern about the health risks induced by microorganisms in food, such as Escherichia coli O157:H7 and Listeria monocytogenes, has been rising continuously. Various organizations and regulatory agencies, including the U.S. EPA, USDA, and FAO/WHO, are building methodologies to apply microbial quantitative risk assessment to risk-based food safety programs. Microbial risks are primarily the result of a single exposure, and their health impacts are immediate and serious; the methodology therefore differs from that of chemical risk assessment. Microbial quantitative risk assessment consists of four steps: hazard identification, exposure assessment, dose-response assessment, and risk characterization. Hazard identification is accomplished by observing and defining the types of adverse health effects in humans associated with exposure to foodborne agents; epidemiological evidence linking a disease with a particular exposure route is an important component of this identification. Exposure assessment includes the quantification of microbial exposure, considering the dynamics of microbial growth in food processing, transport, and packaging and the specific time-temperature conditions at various points from animal production to consumption. Dose-response assessment characterizes the correlation between microbial exposure and disease incidence; unlike chemical carcinogens, dose-response assessment for microbial pathogens has not focused on animal models for extrapolation to humans. Risk characterization links the exposure and dose-response assessments and involves uncertainty analysis. Dose-response methodologies are classified as non-threshold or threshold approaches. Non-threshold models assume that a single organism can produce an infection if it arrives at an appropriate site and that organisms act independently; the Exponential, Beta-Poisson, Gompertz, and Gamma-Weibull models are currently used as non-threshold models, while the Log-normal and Log-logistic models are used as threshold models. The threshold approach assumes that a toxicant is produced by the interaction of organisms. This study reviewed the detailed process, including risk values computed from model parameters and microbial exposure doses, and suggested a model application methodology for exposure assessment using assumed food microbial data (NaCl, water activity, temperature, pH, etc.) and the commercially available Food MicroModel. Human volunteer data from healthy subjects are preferred over epidemiological data for obtaining exact dose-response data, though foreign agencies are studying the correlation between human and animal responses. To compare differences in population sensitivity, domestic studies must be carried out, such as establishing dose-response data for Korean volunteers for each microorganism and microbial exposure assessment in food.
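
The two non-threshold models most often cited above have simple closed forms. A minimal sketch, with illustrative parameters only (real values must be fitted to volunteer or epidemiological data for a specific pathogen):

```python
import numpy as np

def exponential_model(dose, r):
    # Non-threshold exponential model: P(infection) = 1 - exp(-r * dose).
    return 1.0 - np.exp(-r * dose)

def beta_poisson_model(dose, alpha, beta):
    # Approximate Beta-Poisson model: P = 1 - (1 + dose / beta) ** (-alpha).
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters only; real values are fitted per pathogen.
dose = np.logspace(0, 6, num=7)          # 1 to 1e6 organisms
print(exponential_model(dose, r=1e-4))
print(beta_poisson_model(dose, alpha=0.25, beta=1.0e4))
```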


Comparison of Effectiveness about Image Quality and Scan Time According to Reconstruction Method in Bone SPECT (영상 재구성 방법에 따른 Bone SPECT 영상의 질과 검사시간에 대한 실효성 비교)

  • Kim, Woo-Hyun;Jung, Woo-Young;Lee, Ju-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.9-14 / 2009
  • Purpose: In nuclear medicine today, many studies and efforts are being made to reduce scan time, as well as the waiting time between injection of a radiopharmaceutical and the examination. Several methods are used clinically, such as developing new radiopharmaceuticals that are absorbed into target organs more quickly and shortening acquisition time by increasing the number of gamma camera detectors. Each equipment manufacturer has also improved its image processing techniques to reduce scan time. In this paper, we analyzed the difference in image quality between the commercialized, clinically applied FBP and 3D OSEM reconstruction methods and the Astonish reconstruction method (a fast iterative reconstruction method from Philips), as well as the dependence of image quality on scan time. Materials and Methods: We investigated 32 patients who underwent Bone SPECT from June to July 2008 at the department of nuclear medicine, ASAN Medical Center in Seoul. Images of 40 sec/frame and 20 sec/frame were acquired using a Philips PRECEDENCE 16 gamma camera and then reconstructed with the Astonish, 3D OSEM, and FBP methods. For qualitative analysis, a blinded test was performed in which clinical interpreting physicians reviewed all images from each reconstruction method. For quantitative analysis, we analyzed the target to non-target ratio by drawing lesion-centered ROIs; each image was analyzed with the same location and size of ROI. Results: In the qualitative analysis, there was no significant difference in image quality due to acquisition time changes. In the quantitative analysis, the images reconstructed with the Astonish method showed good quality, with better sharpness and clearer distinction between lesions and surrounding areas. The mean and standard deviation of the target to non-target ratio for the 40 sec/frame and 20 sec/frame images were: Astonish (40 sec: 13.91±5.62, 20 sec: 13.88±5.92), 3D OSEM (40 sec: 10.60±3.55, 20 sec: 10.55±3.64), and FBP (40 sec: 8.30±4.44, 20 sec: 8.19±4.20). Comparing the 20 sec and 40 sec ratios for Astonish (t=0.16, p=0.872), 3D OSEM (t=0.51, p=0.610), and FBP (t=0.73, p=0.469), there was no statistically significant difference in image quality by acquisition time. FBP showed no statistical difference overall, although some images differed between 40 sec/frame and 20 sec/frame due to various factors. Conclusions: In the effort to reduce nuclear medicine scan time, hardware development has slowed while software has advanced at a relentless pace. Thanks to the development of computer hardware, image reconstruction time has been reduced, and expanded storage capacity enables iterative methods that previously could not be performed due to technical limits. As image processing techniques have developed, scan time has been reduced while image quality remains at a similar level. Maintaining exam quality while reducing scan time lessens patient discomfort and perceived waiting time, improves the accessibility of nuclear medicine exams, and provides better service to patients and referring physicians, thereby improving the standing of the department of nuclear medicine. Concurrent Imaging, a new function that sets up each image acquisition parameter, also makes it possible to acquire images with various parameters simultaneously in a single examination.
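
As a rough illustration of the quantitative analysis described above, the sketch below computes a target to non-target ratio from ROI masks and runs the paired t-test used to compare the 40 sec/frame and 20 sec/frame acquisitions; the patient values are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def target_to_nontarget(image, target_mask, background_mask):
    # Mean-count ratio between a lesion ROI and a background ROI (boolean masks).
    return image[target_mask].mean() / image[background_mask].mean()

# Placeholder per-patient ratios for 40 sec/frame vs. 20 sec/frame acquisitions,
# compared with the same paired t-test reported for each reconstruction method.
ratios_40s = np.array([13.2, 14.5, 12.9, 15.1, 13.8])
ratios_20s = np.array([13.0, 14.7, 12.8, 15.0, 13.9])
t_stat, p_value = stats.ttest_rel(ratios_40s, ratios_20s)
print(f"t={t_stat:.2f}, p={p_value:.3f}")
```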


Virus Inactivation during the Manufacture of a Collagen Type I from Bovine Hides (소 가죽 유래 Type I Collagen 생산 공정에서 바이러스 불활화)

  • Bae, Jung Eun;Kim, Chan Kyung;Kim, Sungpo;Yang, Eun Kyung;Kim, In Seop
    • Korean Journal of Microbiology / v.48 no.4 / pp.314-318 / 2012
  • Most types of collagen used for biomedical applications, such as cell therapy and tissue engineering, are derived from animal tissues. Therefore, special precautions must be taken during the production of these proteins to assure against the possibility of the products transmitting infectious diseases to the recipients. The ability to remove and/or inactivate known and potential viral contaminants during the manufacturing process is an increasingly important parameter in assessing the safety of biomedical products. The purpose of this study was to evaluate the efficacy of 70% ethanol treatment and pepsin treatment at pH 2.0 for the inactivation of bovine viruses during the manufacture of collagen type I from bovine hides. A variety of experimental model viruses for bovine viruses, including bovine herpes virus (BHV), bovine viral diarrhea virus (BVDV), bovine parainfluenza 3 virus (BPIV-3), and bovine parvovirus (BPV), were chosen for the evaluation of viral inactivation efficacy. BHV, BVDV, BPIV-3, and BPV were effectively inactivated to undetectable levels within 1 h of the 24-h 70% ethanol treatment, with log reduction factors of ≥5.58, ≥5.32, ≥5.11, and ≥3.42, respectively. They were likewise inactivated to undetectable levels within 5 days of the 14-day pepsin treatment, with log reduction factors of ≥7.08, ≥6.60, ≥5.60, and ≥3.59, respectively. The cumulative virus reduction factors of BHV, BVDV, BPIV-3, and BPV were ≥12.66, ≥11.92, ≥10.71, and ≥7.01. These results indicate that the production process for collagen type I from bovine hides has sufficient virus-reducing capacity to achieve a high margin of virus safety.
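
The log reduction factors above follow from straightforward titer arithmetic. A minimal sketch, with hypothetical titers chosen only to reproduce the scale of the reported numbers:

```python
import math

def log_reduction_factor(load_before, load_after):
    # LRF = log10(virus load before the step) - log10(load after the step).
    return math.log10(load_before) - math.log10(load_after)

# Hypothetical titers chosen only to illustrate the arithmetic: ~5.58 logs for
# the ethanol step and ~7.08 logs for the pepsin step sum to the cumulative
# factor of ~12.66 reported for BHV.
ethanol = log_reduction_factor(1.0e8, 2.63e2)
pepsin = log_reduction_factor(1.0e9, 8.3e1)
print(round(ethanol, 2), round(pepsin, 2), round(ethanol + pepsin, 2))
```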

Structural change and electrical conductivity according to Sr content in Cu-doped LSM (La1-xSrxMn0.8Cu0.2O3) (Sr 함량이 Cu-doped LSM(La1-xSrxMn0.8Cu0.2O3)의 구조적변화와 전기전도도에 미치는 영향)

  • Ryu, Ji-Seung;Noh, Tai-Min;Kim, Jin-Seong;Lee, Hee-Soo
    • Journal of the Korean Crystal Growth and Crystal Technology / v.22 no.2 / pp.78-83 / 2012
  • The structural change and the electrical conductivity with Sr content in La1-xSrxMn0.8Cu0.2O3 (LSMCu) were studied. La0.8Sr0.2MnO3 (LSM) and La1-xSrxMn0.8Cu0.2O3 (0.1 ≤ x ≤ 0.4) were synthesized by the EDTA citric complexing process (ECCP). A decrease in the lattice parameters and lattice volumes was observed with increasing Sr content, attributed to the increase of Mn4+ and Cu3+ ions on the B-site. The electrical conductivity, measured from 500°C to 1000°C, increased with Sr content in the composition range 0.1 ≤ x ≤ 0.3, reaching 172.6 S/cm at 750°C and a maximum of 177.7 S/cm at 950°C for x = 0.3. The electrical conductivity decreased at x = 0.4 because of the presence of a second phase in the grain boundaries. The lattice volume contracted as the Mn4+ and Cu3+ ions on the B-site increased with Sr content, and the electrical conductivity increased with the number of charge carriers involved in the hopping mechanism.
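
The abstract does not state the measurement geometry, but as a hedged illustration of how a conductivity in S/cm is obtained from a DC four-probe measurement on a bar sample, one can compute sigma = I*L/(V*A); the drive current, voltage, and dimensions below are invented to land near the reported magnitude.

```python
def conductivity_s_per_cm(current_a, voltage_v, length_cm, area_cm2):
    # sigma = I * L / (V * A) for a uniform bar between the voltage probes.
    resistance_ohm = voltage_v / current_a
    return length_cm / (resistance_ohm * area_cm2)

# Invented numbers: 100 mA drive, 1.45 mV drop over 0.5 cm on a 0.2 cm^2 bar
# gives ~172 S/cm, the order of the value reported at 750 C for x = 0.3.
print(conductivity_s_per_cm(0.1, 1.45e-3, 0.5, 0.2))
```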

Convolution-Superposition Based IMRT Plan Study for the PTV Containing the Air Region: A Prostate Cancer Case (Convolution-Superposition 알고리즘을 이용한 치료계획시스템에서 공기가 포함된 표적체적에 대한 IMRT 플랜: 전립선 케이스)

  • Kang, Sei-Kwon;Yoon, Jai-Woong;Park, Soah;Hwang, Taejin;Cheong, Kwang-Ho;Han, Taejin;Kim, Haeyoung;Lee, Me-Yeon;Kim, Kyoung Ju;Bae, Hoonsik
    • Progress in Medical Physics / v.24 no.4 / pp.271-277 / 2013
  • In prostate IMRT planning, the planning target volume (PTV), expanded from the clinical target volume (CTV), often contains an overlap air volume from the rectum, which poses a problem in optimization and prescription. This study aimed to establish a planning method for such a case. There are three options for which volume should be considered the target during optimization: the PTV including the air volume at air density ('airOpt'), the PTV including the air volume with its density set to one, mimicking tissue ('density1Opt'), and the PTV excluding the air volume ('noAirOpt'). Using 10 MV photon beams, seven-field IMRT plans for each target were created under the same parameter conditions. For the three cases, DVHs for the PTV, bladder, and rectum were compared. The dose coverage of the CTV and of a shifted CTV was also evaluated, where the shifted CTV was a copied virtual CTV translated toward the rectum inside the PTV, occupying the initial position of the overlap air volume and simulating the worst case for target dose coverage. Among the three options, only the density1Opt plan gave a clinically acceptable result in terms of target coverage and maximum dose. The airOpt plan gave an excessively high dose and excessive dose coverage of the target volume, whereas the noAirOpt plan underdosed the shifted CTV. Therefore, for a prostate IMRT plan with an air region in the PTV, modifying the density of the included air to a value of one is suggested prior to optimization and prescription. This idea can be equally applied to other cases, including head and neck cancer with a PTV containing an overlapped air region. Further study is underway.
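
A minimal sketch of the 'density1Opt' idea, implemented as a voxel-wise density override on arrays; the air threshold and function name are hypothetical, and a clinical TPS would apply its own override tool rather than this code.

```python
import numpy as np

def override_air_in_ptv(density, ptv_mask, air_threshold=0.2):
    # Set voxels inside the PTV whose density looks like air to 1.0 g/cm^3,
    # mimicking the 'density1Opt' option before optimization.
    corrected = density.copy()
    corrected[ptv_mask & (density < air_threshold)] = 1.0
    return corrected

# Toy example: the middle voxel is overlap air inside the PTV.
density = np.array([1.00, 0.05, 1.00])
ptv_mask = np.array([True, True, True])
print(override_air_in_ptv(density, ptv_mask))   # -> [1.0, 1.0, 1.0]
```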

A Study on the Effects of Young Entrepreneur Competency on Startup Performance: Focusing on the Mediating Effect of Network Activities (청년창업가의 역량이 창업성과에 미치는 영향 요인에 관한 연구: 네트워크활동의 매개효과 중심으로)

  • Hyun Chae Song;Chul-Moo Heo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.19 no.2 / pp.141-157 / 2024
  • This study analyzes the effect of entrepreneurial competencies on start-up performance through network activities for young entrepreneurs. Entrepreneurial competencies comprise opportunity recognition, marketing, technical, and creative competencies. A total of 354 questionnaires collected from young entrepreneurs residing in the country were used for the empirical analysis, which was carried out with SPSS v28.0 and the PROCESS macro v4.3 based on a single-mediator research model. First, opportunity recognition, marketing, technical, and creative competencies were all found to have a significant positive (+) effect on network activities; marketing competence had the greatest effect on network activities and technical competence the least. Second, network activities were found to have a significant positive (+) effect on start-up performance. Third, opportunity recognition, marketing, technical, and creative competencies were found to have a positive (+) effect on start-up performance; creative competence had the greatest effect and technical competence the smallest. Fourth, network activities were found to mediate between entrepreneurial competencies and start-up performance. As for the relative size of the indirect effects of the independent variables, marketing competence had the greatest effect on start-up performance and technical competence the smallest. The academic implications of this study include investigating the significance and relationships of the variables, verifying the theoretical framework related to entrepreneurship, identifying the main drivers of start-up success, and highlighting the importance of the network between entrepreneurial competencies and start-up performance. The practical implications suggest the importance of marketing competencies for networking and the differentiation of competencies; the study emphasizes the strategic role of creative competence and provides guidance to policymakers on customized policies for fostering valuable start-ups.
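
For readers unfamiliar with the single-mediator model used above (the analogue of PROCESS model 4), here is a minimal sketch on simulated data; the coefficients and noise are invented, and a real analysis would add bootstrap confidence intervals for the indirect effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 354                                       # matches the sample size above
x = rng.normal(size=n)                        # competency score (simulated)
m = 0.5 * x + rng.normal(size=n)              # network activity (mediator)
y = 0.3 * x + 0.4 * m + rng.normal(size=n)    # start-up performance

def ols(X, y):
    # Ordinary least squares with an intercept column prepended.
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(x, m)[1]                              # path a: X -> M
coef = ols(np.column_stack([x, m]), y)
c_prime, b = coef[1], coef[2]                 # direct effect; path b: M -> Y | X
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {c_prime:.3f}")
```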


Characteristics Evaluation of Combustion by Analysis of Fuel Gas Using Refuse-derived Fuel by Mixing Different Ratios with Organic and Combustible Wastes (배연가스 분석에 의한 가연성과 유기성폐기물을 혼합한 고형화연료 연소 특성평가)

  • Ha, Sang-An
    • Journal of the Korea Organic Resources Recycling Association / v.17 no.3 / pp.27-39 / 2009
  • The main objective of this study is to investigate the characteristics of combustion by analyzing flue gases from combustion equipment under various combustion conditions for refuse-derived fuels (RDFs). CO is an indicator of incomplete combustion during the combustion process; the lowest CO level was produced at an air-fuel condition of m = 2 and 800°C. CO2 is the final product of complete combustion; the highest amount of CO2 was likewise produced at m = 2 and 800°C. The highest level of SO2 was produced by the S.1 sample, which contains the most sulfur. The highest level of NOx was produced by the S.1 sample, with the highest nitrogen content, at m = 2 and 800°C. HCl gas, generated by reaction with metal catalysts through oxygen-catalyzed reactions during combustion, is a precursor of dioxin formation; higher HCl levels were produced by samples with higher chlorine content, and the lowest HCl level occurred at m = 2 and 800°C. The lowest level of NH3 was generated at m = 2 after 3 minutes; the air-fuel condition matters more for NH3 formation than the operating temperature. Higher levels of H2S were generated by the S.1 sample with its higher sulfur content and by RDFs containing higher mixture ratios of sewage sludge and food waste. The gases and gas levels from the combustion of S.1 and S.2 were very similar to those from the combustion of coal. These results indicate that the RDFs are feasible as both auxiliary and main fuels.
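
The air-fuel condition m above is the excess air ratio. As a hedged aside not taken from the paper, m can be estimated from the dry flue-gas O2 concentration with the standard approximation below.

```python
def excess_air_ratio(o2_percent_dry):
    # Standard approximation: m = 21 / (21 - O2%), O2 measured in dry flue gas.
    return 21.0 / (21.0 - o2_percent_dry)

print(excess_air_ratio(10.5))   # ~2.0, i.e. the m = 2 condition above
```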

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions to those having triangular, trapezoidal, or otherwise pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape; however, significant memory waste can result. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (for good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would be 8 × 5 bits, so the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse are memorized accordingly. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net): if the index is equal to the bus value, the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions; at any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
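
A minimal sketch of the word-length arithmetic and the compressed (index, value) packing described above, using the paper's own figures (128-element universe, 8 fuzzy sets, 32 truth levels, nfm = 3); the packing order within a word is an assumption made for illustration.

```python
# Word-length arithmetic for the compressed antecedent memory.
NFM = 3       # max non-null memberships per element (the paper's hypothesis)
DM_M = 5      # bits per membership value (32 truth levels)
DM_FM = 3     # bits per fuzzy-set index (8 membership functions)
U_SIZE = 128  # elements in the universe of discourse

word_length = NFM * (DM_M + DM_FM)        # 3 * (5 + 3) = 24 bits
print(word_length, U_SIZE * word_length)  # 24 and 3072 bits, vs. 128*40 = 5120

def pack_row(pairs):
    # Pack up to NFM (set_index, membership) pairs into one memory word.
    word = 0
    for idx, mu in pairs[:NFM]:
        word = (word << (DM_M + DM_FM)) | (idx << DM_M) | mu
    return word

print(bin(pack_row([(2, 31), (3, 7)])))
```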
