• Title/Summary/Keyword: Quantitative parameters

Evaluation of Performance and No-reference-based Quality for CT Image with ADMIRE Iterative Reconstruction Parameters: A Pilot Study (ADMIRE 반복적 재구성 파라메터에 따른 CT 영상의 특성 및 무참조 기반 화질 평가: 선행연구)

  • Bo-Min Park;Yoo-Jin Seo;Seong-Hyeon Kang;Jina Shim;Hajin Kim;Sewon Lim;Youngjin Lee
    • Journal of radiological science and technology
    • /
    • v.47 no.3
    • /
    • pp.175-182
    • /
    • 2024
  • Advanced modeled iterative reconstruction (ADMIRE) is an iterative reconstruction method whose strength and kernel can be adjusted, each of which is known to affect computed tomography (CT) image quality. The aim of this study was to quantitatively analyze the noise and spatial resolution of CT images according to the ADMIRE control factors. Patient images were obtained by applying ADMIRE strengths 2 and 3 and kernels B40 and B59. For quantitative evaluation, the noise level, spatial resolution, and overall image quality were measured using the coefficient of variation (COV), edge rise distance (ERD), and natural image quality evaluator (NIQE). The best average COV, ERD, and NIQE results were obtained with ADMIRE 2 + B40, ADMIRE 3 + B59, and ADMIRE 3 + B59, respectively. NIQE, a no-reference measure of overall image quality, was about 6.04 with ADMIRE 3 + B59, the best result among the reconstruction conditions. The results indicate that the ADMIRE strength and kernel chosen for reconstruction have a significant impact on CT image quality, highlighting the importance of adjusting these control factors to the clinical environment.
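The quantitative measures named above are standard: COV is the ratio of the standard deviation to the mean over a uniform region of interest, and ERD is commonly taken as the distance over which an edge profile rises from 10% to 90% of its amplitude. The following Python sketch illustrates both measures under those assumptions; the synthetic arrays, pixel spacing, and the 10%-90% convention are illustrative, not values from the study, and NIQE is omitted because it requires a pretrained natural-scene statistics model.

```python
import numpy as np

def coefficient_of_variation(roi: np.ndarray) -> float:
    """COV = standard deviation / mean over a uniform ROI (lower = less noise)."""
    return float(np.std(roi) / np.mean(roi))

def edge_rise_distance(profile: np.ndarray, pixel_spacing_mm: float) -> float:
    """ERD: distance over which an (increasing) edge profile rises from 10% to
    90% of its full amplitude (shorter = better spatial resolution)."""
    lo, hi = profile.min(), profile.max()
    p10, p90 = lo + 0.1 * (hi - lo), lo + 0.9 * (hi - lo)
    idx10 = int(np.argmax(profile >= p10))  # first sample above 10%
    idx90 = int(np.argmax(profile >= p90))  # first sample above 90%
    return (idx90 - idx10) * pixel_spacing_mm

# Synthetic data for illustration only
rng = np.random.default_rng(0)
uniform_roi = 100 + 5 * rng.standard_normal((64, 64))       # homogeneous region
edge = np.clip(np.linspace(-2, 2, 50), -1, 1) * 50 + 100    # smoothed edge profile
print(coefficient_of_variation(uniform_roi), edge_rise_distance(edge, 0.5))
```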

Nonlinear free and forced vibrations of oblique stiffened porous FG shallow shells embedded in a nonlinear elastic foundation

  • Kamran Foroutan;Liming Dai
    • Structural Engineering and Mechanics
    • /
    • v.89 no.1
    • /
    • pp.33-46
    • /
    • 2024
  • The present research delves into the analysis of nonlinear free and forced vibrations of porous functionally graded (FG) shallow shells reinforced with oblique stiffeners, embedded in a nonlinear elastic foundation (NEF) and subjected to external excitation. Two distinct types of porous FG shallow shells, characterized by even and uneven porosity distribution along the thickness direction, are considered in the research. In order to model the stiffeners, Lekhnitskii's smeared stiffener technique is implemented. With the stress function and first-order shear deformation theory (FSDT), the nonlinear model of the oblique stiffened shallow shells is established. The strain-displacement relationships for the system are derived via the FSDT and the von Kármán geometric assumptions. To discretize the nonlinear governing equations, the Galerkin method is employed. The model thus developed allows analysis of the effects of stiffeners at arbitrary angles, in addition to a quantitative investigation of the influence of the surrounding nonlinear elastic foundations. To numerically solve the vibration problem, the 4th-order P-T method is used, as this method is known for its enhanced accuracy and reliability. The validation of the present findings includes a comprehensive comparison with outcomes documented in the existing literature, as well as a comparative analysis of the numerical results against those obtained using the 4th-order Runge-Kutta method. The impact of stiffeners with varying angles and of the material parameters on the vibration characteristics of the system is also explored. Researchers and engineers working in this field may use the results of this study as benchmarks in the design and analysis of the considered shell systems.
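The abstract benchmarks the 4th-order P-T method against the classical 4th-order Runge-Kutta (RK4) scheme on the Galerkin-reduced equations. As a point of reference, the sketch below integrates a single-mode forced Duffing-type oscillator with RK4, the generic form such a Galerkin reduction produces; the damping, stiffness, and forcing coefficients are hypothetical, not the paper's.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Single-mode reduced equation of Duffing type:
#   w'' + 2*zeta*w' + alpha*w + beta*w**3 = F*cos(Omega*t)
zeta, alpha, beta, F, Omega = 0.02, 1.0, 0.5, 0.1, 1.0   # hypothetical coefficients

def rhs(t, y):
    w, v = y
    return np.array([v, F * np.cos(Omega * t) - 2 * zeta * v - alpha * w - beta * w**3])

t, h, y = 0.0, 0.01, np.array([0.0, 0.0])
for _ in range(50_000):             # march the forced response past the transient
    y = rk4_step(rhs, t, y, h)
    t += h
print("steady-state displacement sample:", y[0])
```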

Variations of Speed of Sound and Attenuation Coefficient with Porosity and Structure in Bone Mimics (뼈 모사체에서 다공율 및 구조에 대한 음속 및 감쇠계수의 변화)

  • Kim, Seong-Il;Choi, Min-Joo;Lee, Kang-Il
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.6
    • /
    • pp.388-394
    • /
    • 2010
  • In the present study, polyacetal bone mimics with circular cylindrical pores were used to investigate variations of the speed of sound and attenuation coefficient with porosity and microarchitecture in bone. The speed of sound and attenuation coefficient of the 6 bone mimics with porosities from 0 % to 65.9 % were measured by a through-transmission method in water, using a pair of broadband, unfocused transducers with a diameter of 12.7 mm and a center frequency of 1.0 MHz. Independently of the structural properties of the bone mimics, the speed of sound decreased almost linearly with increasing porosity. The attenuation coefficient measured at 1.0 MHz exhibited linear or nonlinear correlations with the porosity, depending on the structural properties of the bone mimics. These results are consistent with those previously published by other researchers using bone samples and mimics, and advance our understanding of how the ultrasonic parameters used for the diagnosis of osteoporosis relate to bone density and microarchitecture in human bones.
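In a through-transmission (substitution) measurement of this kind, the speed of sound is usually obtained from the time-of-flight change when the sample replaces an equal water path, and the attenuation coefficient from the ratio of reference and sample amplitude spectra. The sketch below applies those standard substitution relations; the water sound speed, sample thickness, and spectra are illustrative assumptions, and interface losses and water attenuation are neglected.

```python
import numpy as np

C_WATER = 1482.0  # m/s, approximate speed of sound in water at room temperature

def speed_of_sound(thickness_m: float, dt_s: float) -> float:
    """Substitution method: dt_s is the arrival-time advance of the pulse when the
    sample replaces an equal water path (a faster sample gives dt_s > 0)."""
    return 1.0 / (1.0 / C_WATER - dt_s / thickness_m)

def attenuation_db_per_cm(a_ref: np.ndarray, a_sample: np.ndarray,
                          thickness_cm: float) -> np.ndarray:
    """Frequency-dependent attenuation from the ratio of reference and sample
    amplitude spectra (interface losses and water attenuation neglected)."""
    return 20.0 / thickness_cm * np.log10(a_ref / a_sample)

# Illustrative numbers (not the paper's data): 10 mm sample, 1.6 us time advance
print(speed_of_sound(0.01, 1.6e-6))                       # ~1.94 km/s
print(attenuation_db_per_cm(np.array([1.0, 0.9]), np.array([0.5, 0.3]), 1.0))
```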

International case study comparing PSA modeling approaches for nuclear digital I&C - OECD/NEA task DIGMAP

  • Markus Porthin;Sung-Min Shin;Richard Quatrain;Tero Tyrvainen;Jiri Sedlak;Hans Brinkman;Christian Muller;Paolo Picca;Milan Jaros;Venkat Natarajan;Ewgenij Piljugin;Jeanne Demgne
    • Nuclear Engineering and Technology
    • /
    • v.55 no.12
    • /
    • pp.4367-4381
    • /
    • 2023
  • Nuclear power plants are increasingly being equipped with digital I&C systems. Although some probabilistic safety assessment (PSA) models for the digital I&C of nuclear power plants have been constructed, there is currently no specific internationally agreed guidance for their modeling. This paper presents an initiative by the OECD Nuclear Energy Agency called "Digital I&C PSA - Comparative application of DIGital I&C Modelling Approaches for PSA (DIGMAP)", which aimed to advance the field towards practical and defensible modeling principles. The task, carried out in 2017-2021, used a simplified description of a plant focusing on the digital I&C systems important to safety, for which the participating organizations independently developed their own PSA models. Through comparison of the PSA models, sensitivity analyses, and observations throughout the whole activity, both qualitative and quantitative lessons were learned. These include insights into the failure behavior of digital I&C systems, experience from models with different levels of abstraction, benefits of benchmarking, and identification of the major and minor contributors to the core damage frequency. The study also highlighted the challenges of modeling large common cause component groups and the difficulties associated with estimating key software and common cause failure parameters.
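To make the common cause failure (CCF) difficulty concrete, the sketch below composes the failure probability of a redundant k-out-of-n I&C channel group under a simple beta-factor CCF model. This is a deliberately simplified illustration of the kind of parameterization the DIGMAP models had to agree on, not the task's actual modeling approach; the per-channel failure probability, beta factor, and voting logic are hypothetical.

```python
from math import comb

def koon_failure_prob(p_single: float, beta: float, n: int, k: int) -> float:
    """Failure probability of a k-out-of-n channel group with a beta-factor CCF model:
    a fraction `beta` of each channel's failure probability acts as a common cause
    failure that disables all n channels at once, while the remaining fraction
    fails channels independently. The group fails when fewer than k channels work."""
    p_ccf = beta * p_single                   # common cause part (fails all channels)
    p_ind = (1.0 - beta) * p_single           # independent part of channel failure
    # probability that at least n - k + 1 channels fail independently
    p_group_ind = sum(
        comb(n, m) * p_ind**m * (1 - p_ind)**(n - m)
        for m in range(n - k + 1, n + 1)
    )
    return p_ccf + (1.0 - p_ccf) * p_group_ind

# Hypothetical example: 2-out-of-4 voting logic, per-channel failure 1e-3, beta = 5 %
print(koon_failure_prob(1e-3, 0.05, n=4, k=2))
```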

Elemental Composition of the Soils using LIBS Laser Induced Breakdown Spectroscopy

  • Muhammad Aslam Khoso;Seher Saleem;Altaf H. Nizamani;Hussain Saleem;Abdul Majid Soomro;Waseem Ahmed Bhutto;Saifullah Jamali;Nek Muhammad Shaikh
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.200-206
    • /
    • 2024
  • The laser induced breakdown spectroscopy (LIBS) technique has been used to determine the elemental composition of soils. In this technique, a high-energy laser pulse is focused on a sample to produce a plasma, and spectroscopic analysis of the plasma plume reveals the elements present in the soil. The technique is effective and rapid for the qualitative and quantitative analysis of all types of samples. In this work, a Q-switched Nd:YAG laser operating at its fundamental wavelength (1064 nm), with a 5 ns pulse width and a 10 Hz repetition rate, was focused on soil samples using a 10 cm quartz lens. The emission spectra of the soil contain lines of Iron (Fe), Calcium (Ca), Titanium (Ti), Silicon (Si), Aluminum (Al), Magnesium (Mg), Manganese (Mn), Potassium (K), Nickel (Ni), Chromium (Cr), Copper (Cu), Mercury (Hg), Barium (Ba), Vanadium (V), Lead (Pb), Nitrogen (N), Scandium (Sc), Hydrogen (H), Strontium (Sr), and Lithium (Li), each with its characteristic transition-line fingerprint. The maximum intensity of the transition lines was observed close to the surface of the sample and decreased along the axial direction of the plasma expansion due to thermalization and recombination processes. We also determined the plasma parameters, namely the electron temperature and the electron number density, using the Boltzmann plot method and the Stark broadening of the transition lines, respectively. Close to the sample surface, the electron temperature is estimated at 14,611 K and the electron number density at 4.1 × 10¹⁶ cm⁻³.
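The Boltzmann plot method mentioned above extracts the electron temperature from the linear relation ln(Iλ/(gA)) = -E_upper/(k_B·T_e) + const across several emission lines of one species. The Python sketch below fits that relation with a least-squares line; the line intensities, statistical weights, transition probabilities, and upper-level energies are placeholder values for illustration, not the measured data of this work.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, a_ki, e_upper_ev):
    """Electron temperature from the Boltzmann plot:
       ln(I * lambda / (g * A)) = -E_upper / (k_B * T_e) + const
    A least-squares fit against E_upper gives slope = -1 / (k_B * T_e)."""
    y = np.log(np.asarray(intensity) * np.asarray(wavelength_nm)
               / (np.asarray(g_upper) * np.asarray(a_ki)))
    slope, _ = np.polyfit(np.asarray(e_upper_ev), y, 1)
    return -1.0 / (K_B_EV * slope)

# Placeholder data for a handful of neutral-atom lines (illustrative only)
I   = [1200.0, 800.0, 450.0, 300.0]      # integrated line intensities (a.u.)
lam = [372.0, 386.0, 404.6, 438.4]       # wavelengths (nm)
g   = [11, 7, 9, 9]                      # upper-level statistical weights
A   = [1.6e7, 1.6e7, 8.6e7, 5.0e7]       # transition probabilities (1/s)
E   = [3.33, 3.21, 4.55, 4.31]           # upper-level energies (eV)
print("T_e ~", round(boltzmann_plot_temperature(I, lam, g, A, E)), "K")
```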

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which have involved two distinct approaches. The first approach uses application specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment, and high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so tailoring an embedded processor to fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes, an ASIC approach being extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

|                  | MIPS R3000 (regular) | MIPS R3000 (with min/max) | ASIC     |
|------------------|----------------------|---------------------------|----------|
| 6,000 inferences | 125 s                | 49 s                      | 0.0038 s |
| 1 inference      | 20.8 ms              | 8.2 ms                    | 6.4 µs   |
| FLIPS            | 48                   | 122                       | 156,250  |
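The inference mechanism the chips (and the R3000 simulator in Table I) implement is max-min composition with Mamdani implication, table-lookup fuzzification, and centroid defuzzification, with each fuzzy set held as a 64-element array. The sketch below is a minimal software rendering of one such inference cycle under those assumptions; the rule base, membership-function shapes, and input values are hypothetical, and min/max are exactly the operations the proposed RISC instructions would accelerate.

```python
import numpy as np

N = 64  # each fuzzy set is a 64-element discretized membership function

def tri(center, width, n=N):
    """Triangular membership function sampled on the discrete universe 0..n-1."""
    x = np.arange(n)
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

def mamdani_infer(rules, inputs):
    """Max-min (Mamdani) inference: each rule's strength is the min of its
    antecedent membership degrees, its output is the consequent clipped (min)
    at that strength, and all rule outputs are aggregated element-wise by max."""
    agg = np.zeros(N)
    for antecedents, consequent in rules:
        strength = min(mf[x] for mf, x in zip(antecedents, inputs))
        agg = np.maximum(agg, np.minimum(consequent, strength))
    return agg

def centroid(mf):
    """Centroid defuzzification over the discrete universe."""
    x = np.arange(N)
    return float((x * mf).sum() / mf.sum()) if mf.sum() > 0 else 0.0

# Hypothetical two-input, one-output rule base: IF A and B THEN E
rules = [
    ((tri(16, 12), tri(16, 12)), tri(16, 12)),   # IF A is LOW  and B is LOW  THEN E is LOW
    ((tri(48, 12), tri(48, 12)), tri(48, 12)),   # IF A is HIGH and B is HIGH THEN E is HIGH
]
print(centroid(mamdani_infer(rules, inputs=(20, 22))))
```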

Quantitative Assessment of Myocardial Tissue Velocity in Normal Children with Doppler Tissue Imaging : Reference Values, Growth and Heart Rate Related Change (소아에서 도플러 조직영상을 이용한 최대 심근 속도의 계측 : 정상 추정치 및 성장 및 심박동수에 따른 변화)

  • Kim, Se Young;Hyun, Myung Chul;Lee, Sang Bum
    • Clinical and Experimental Pediatrics
    • /
    • v.48 no.8
    • /
    • pp.846-856
    • /
    • 2005
  • Purpose: To measure the peak myocardial tissue velocities and the patterns of longitudinal motion of the atrioventricular (AV) annuli, and to assess body weight- and heart rate-related changes in normal children. Methods: Using pulsed wave tissue Doppler imaging (TDI), we measured peak systolic, early diastolic, and late diastolic myocardial velocities in 72 normal children at six different sites in the apical four-chamber (A4C) view and at four different sites in the apical two-chamber (A2C) view, compared those values with each other, and examined the effects of body weight and heart rate. Longitudinal motion of the AV annuli was measured at three different sites in the A4C view. Results: There were no significant differences in the TDI parameters between genders, echo machines, or the three doctors performing TDI. Peak myocardial velocities were significantly higher at the base of the heart than in the mid-ventricular region, and in the right ventricular lateral wall than in the left ventricular lateral wall or the IVS. The TDI parameters showed no significant correlation with fractional shortening (%). Peak systolic and early diastolic myocardial velocities had no correlation with heart rate, but peak late diastolic velocities and the A/E ratio correlated positively with heart rate. Correlations between the TDI parameters and body weight were inconsistent. Absolute longitudinal displacement and % displacement did not differ between genders and were not correlated with the TDI parameters. Conclusion: We measured the peak myocardial velocities with TDI and the longitudinal motion of the AV annuli using M-mode echocardiography in normal children. With larger-scale evaluation, we may establish reference values in normal children and broaden clinical applicability in congenital and acquired heart diseases.

Gadoteridol's Signal Change according to TR, TE Parameters in T1 Image (T1영상에서 TR, TE 매개변수에 따른 Gadoteridol의 신호강도 변화)

  • Jeong, Hyun Keun;Jeong, Hyun Do;Nam, Ki Chang;Kim, Ho Chul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.9
    • /
    • pp.117-124
    • /
    • 2015
  • In this paper, we introduce how to control the physical MR parameters TR and TE so as to manage the signal intensity (SI) of $H_1$ spins bound to gadolinium after administration of an MR contrast agent, exploiting the T1 effect for diagnostic usefulness. We used an MRI phantom made with 0.5 mol Gadoteridol. The phantom was scanned with an FSE sequence using different TR and TE parameters. In this study, to produce a T1 effect, TR was set to 200, 250, 300, 350, 400, 450, 500, 550, and 600 msec. In addition, TE was set to 6.2, 12.4, 18.6, and 21.6 msec. The results were as follows: the reaction starting point (RSP) was 100, 50, 40, and 30 mmol at TE 6.2, 12.4, 18.6, and 21.6 msec, respectively, irrespective of TR. The max peak signal intensity (MPSI) appeared at 4 mmol for TR 200 msec, while the peak shifted to lower concentrations for TR 250-600 msec. In terms of the reaction area (RA), the highest SI was obtained at TE 6.2 msec for TR 200-600 msec. According to the study, it is possible to control enhancement rates by managing the TR and TE MR parameters; moreover, we expect that enhanced T1 imaging in the MR clinical field can be performed in a practical way with these quantitative data.
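To first order, the TR/TE dependence this phantom study maps out follows the standard spin-echo signal model SI ∝ PD·(1 − e^(−TR/T1))·e^(−TE/T2): shortening TR emphasizes T1 contrast (which gadoteridol enhances by shortening T1), while lengthening TE adds T2 weighting. The sketch below evaluates that model over the study's TR/TE grid; the relaxation times and proton density are illustrative assumptions, not measured values from the paper.

```python
import numpy as np

def spin_echo_signal(pd, tr_ms, te_ms, t1_ms, t2_ms):
    """First-order spin-echo signal model:
       SI ~ PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
    Short TR emphasizes T1 contrast; long TE adds T2 weighting."""
    return pd * (1.0 - np.exp(-tr_ms / t1_ms)) * np.exp(-te_ms / t2_ms)

# Illustrative relaxation times for a gadolinium-doped sample (hypothetical values)
t1, t2 = 300.0, 120.0
for tr in (200, 400, 600):                 # TR values within the study's range (msec)
    for te in (6.2, 12.4, 18.6, 21.6):     # TE values used in the study (msec)
        si = spin_echo_signal(1.0, tr, te, t1, t2)
        print(f"TR={tr} ms, TE={te} ms -> relative SI {si:.3f}")
```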

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.17-35
    • /
    • 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important, since corporations can make strategic decisions on new markets based on the future demand estimated by the models. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among these, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure reliable results. In the beginning of a new market, however, observations are not sufficient for the models to precisely estimate the market's future demand. For this reason, demand estimated from the most adjacent markets is often used as a reference in such cases. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand may be expected to show a pattern similar to that of a reference market when the adoption pattern of a product in the market is determined mainly by the technology underlying the product. However, this process may not always yield satisfactory results, because the similarity between markets is judged by intuition and/or experience. There are two major drawbacks that human experts cannot effectively handle in this approach. One is the abundance of candidate reference markets to consider, and the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can serve as reference markets because they are usually based on similar technologies. However, markets can be classified into different categories even if they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Next, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. This inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missing, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For these reasons, this study proposes a case-based expert system that helps experts overcome the drawbacks in discovering reference markets. First, this study proposes the use of the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and markets missing from a cluster but sharing its characteristics are searched for. Potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates, and finally the new market's parameters are estimated from the reference markets. Two techniques are used in the model: one is the clustering data mining technique, and the other is the content-based filtering of recommender systems. The proposed system implemented with these techniques can determine the most adjacent markets based on whether the user accepts the candidate markets.
Experiments were conducted with five ICT experts to validate the usefulness of the system. In the experiments, the experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts first estimated the growth curve model parameters by intuition alone, and then again with the system. A comparison of the results shows that the parameters estimated with the system were closer to the actual values than those estimated without it.
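The two building blocks the abstract describes, the Bass growth curve and Euclidean-distance similarity between markets, can be sketched as follows. The feature vectors, market names, and (p, q) coefficients below are hypothetical illustrations of the procedure, not values from the paper.

```python
import numpy as np

def bass_adoptions(t, m, p, q):
    """Bass diffusion model: non-cumulative adoptions at time t, with market
    potential m, innovation coefficient p, and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return m * ((p + q) ** 2 / p) * e / (1.0 + (q / p) * e) ** 2

def most_similar_market(new_features, reference_markets):
    """Pick the reference market whose feature vector has the smallest
    Euclidean distance to the new market's features."""
    names = list(reference_markets)
    dists = [np.linalg.norm(np.asarray(new_features) - np.asarray(reference_markets[n]))
             for n in names]
    return names[int(np.argmin(dists))]

# Hypothetical feature vectors (e.g., price index, penetration rate, growth rate)
reference_markets = {"smartphones": [0.8, 0.6, 0.3], "tablets": [0.5, 0.4, 0.2]}
best = most_similar_market([0.75, 0.55, 0.28], reference_markets)
print("closest reference market:", best)

# Borrow that market's (p, q) as initial estimates for the new market (illustrative values)
t = np.arange(1, 11)
print(bass_adoptions(t, m=1_000_000, p=0.03, q=0.38))
```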

Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo;Kim, Byoung-Jai;Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.111-131
    • /
    • 2007
  • As a brand becomes a core asset in creating a corporation's value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management has increasingly centered on the brand rather than on the possession and consumption of goods, and brand-centered management has developed accordingly. The main reason for the increased interest in the relationship between the brand and the consumer is the acquisition of individual consumers and the development of relationships with them. As these relationships develop, a corporation is able to establish long-term relationships, which become a competitive advantage and a strategic asset. The importance of, and growing interest in, brands has also become a major academic issue. Brand equity, brand extension, brand identity, brand relationship, and brand community are results of this interest in brands. More specifically, in marketing, the study of brands has led to the study of the factors involved in building powerful brands and of the brand-building process. Recently, studies have concentrated primarily on the consumer-brand relationship, because brand loyalty cannot explain the dynamic quality aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption, but is conceptualized as a partner. Most past studies concentrated on qualitative analysis of the consumer-brand relationship to show the depth and breadth of its performance, and studies in Korea have been the same. Recently, studies of the consumer-brand relationship have started to concentrate on quantitative rather than qualitative analysis, or have gone further to identify the factors affecting the consumer-brand relationship. Studies with new quantitative approaches show the possibility of using their results as a new way of viewing the consumer-brand relationship and of applying these new concepts in marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them provides theoretical grounds for measuring its sub-dimensions. In other words, most studies simply sum or average the sub-dimensions of the consumer-brand relationship. However, such treatment presupposes that the sub-dimensions form an identical construct, a precondition that most past studies do not meet. This raises questions about the validity and limitations of past studies. The main purpose of this paper is to overcome these limits by drawing on previous studies that treat sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, two arbitrary groups were formed to evaluate the reliability of the measurements, and reliability analyses were carried out on each group. For convergent validity, correlations, Cronbach's alpha, and a one-factor-solution exploratory factor analysis were used. For discriminant validity, the correlation of the consumer-brand relationship was compared with that of involvement, a concept similar to the consumer-brand relationship.
Dependent correlations were also tested following Cohen and Cohen (1975, p. 35), and the results showed that involvement is a construct distinct from the six sub-dimensions of the consumer-brand relationship. Through the results of the studies mentioned above, we conclude that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, which means that the one-dimensional construct of the consumer-brand relationship can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a basis for the methodologies previously performed. This research also opens the possibility of new research on the consumer-brand relationship by establishing that a one-dimensional construct of the consumer-brand relationship can be operationalized. In previous research, the consumer-brand relationship has been classified into several types on the basis of its components, and a number of studies have focused on those types. However, since this research shows that a one-dimensional construct can be operationalized, various studies that make practical use of the level or strength of the consumer-brand relationship, rather than focusing on separate relationship types, are expected to follow. Additionally, this provides a theoretical basis for operationalizing the consumer-brand relationship as a one-dimensional construct, and studies using this construct as a dependent variable, parameter, mediator, and so on are anticipated.
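The measurement evaluation described above rests on standard reliability statistics; as one concrete piece, the sketch below computes Cronbach's alpha for a respondents-by-items response matrix of one sub-dimension. The Likert-scale responses are made up for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
       alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Made-up 7-point Likert responses for one sub-dimension (5 respondents x 4 items)
responses = np.array([
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 6],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
])
print(round(cronbach_alpha(responses), 3))
```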
