• Title/Summary/Keyword: Product-array approach

The Optimal Parameter Decision of $\beta$-carotene Mass Production Using Taguchi Method (다구찌 방법을 이용한 $\beta$-carotene 대량생산의 최적환경 조건 결정)

  • 조용욱;박명규
    • Journal of the Korea Safety Management & Science
    • /
    • v.2 no.3
    • /
    • pp.27-36
    • /
    • 2000
  • The Robust Design method uses a mathematical tool called orthogonal arrays to study a large number of decision variables with a small number of experiments. It also uses a new measure of quality, called the signal-to-noise (S/N) ratio, to predict quality from the customer's perspective. Thus, the most economical product and process design from both the manufacturing and the customer's viewpoints can be accomplished at the smallest affordable development cost. Many companies, big and small, high-tech and low-tech, have found the Robust Design method valuable in making high-quality products available to customers at a low, competitive price while still maintaining an acceptable profit margin. A study to analyze and solve problems of a biochemical process experiment is presented in this paper. We have taken Taguchi's parameter design approach, specifically orthogonal arrays, and determined the optimal levels of the selected variables through analysis of the experimental results using the S/N ratio.

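A minimal sketch of the S/N-ratio analysis described in the Taguchi entry above, assuming replicated responses at each orthogonal-array run; the L4 array layout, factors, and yield data are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi S/N ratio when larger responses are better: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    """Taguchi S/N ratio when smaller responses are better: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical L4 orthogonal array (three two-level factors) and replicated yields per run.
l4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])
yields = np.array([[5.1, 4.8, 5.0],   # replicates for run 1
                   [6.2, 6.0, 6.4],
                   [4.5, 4.7, 4.4],
                   [7.1, 6.8, 7.0]])

# Larger-the-better is the natural choice for a production-yield response.
sn = np.array([sn_larger_the_better(run) for run in yields])
for factor in range(l4.shape[1]):
    means = [sn[l4[:, factor] == level].mean() for level in (1, 2)]
    print(f"factor {factor}: mean S/N at level 1 = {means[0]:.2f}, at level 2 = {means[1]:.2f}")
```

The factor level with the higher mean S/N is preferred; the optimal setting is the combination of winning levels, normally confirmed with a follow-up run.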

Life Assessment of Automotive Electronic Part using Virtual Qualification (Virtual Qualification을 통한 자동차용 전장부품의 수명 평가)

  • Lee, Hae-Jin;Lee, Jung-Youn;Oh, Jae-Eung
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2005.11a
    • /
    • pp.143-146
    • /
    • 2005
  • In modern automotive control modules, mechanical failures of surface-mounted electronic components such as microprocessors, crystals, capacitors, transformers, inductors, and ball grid array packages are major roadblocks to design cycle time and product reliability. This paper presents a general methodology of failure analysis and fatigue prediction for these electronic components under automotive vibration environments. Mechanical performance of these packages is studied through a finite element modeling approach for given vibration environments in automotive applications. Using the results of the vibration simulation, fatigue life is predicted based on cumulative damage analysis and material durability information. A detailed model of the solder/lead joints is built to correlate with the system-level model and obtain solder strains/stresses. The primary focus in this paper is on surface-mount interconnect fatigue failures, and the critical component selected for this analysis is an 80-pin plastic leaded microprocessor.

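The entry above predicts fatigue life from vibration simulation results via cumulative damage analysis. The abstract does not state the exact damage rule, so the sketch below assumes the common Palmgren-Miner linear rule with a Basquin-type S-N curve; the load spectrum and material constants are hypothetical placeholders.

```python
# Hypothetical Basquin-type S-N curve: cycles to failure at stress amplitude S (MPa).
SIGMA_F = 500.0   # fatigue strength coefficient, MPa (placeholder)
B = 0.12          # fatigue strength exponent (placeholder)

def cycles_to_failure(stress_amplitude_mpa):
    return (SIGMA_F / stress_amplitude_mpa) ** (1.0 / B)

# Hypothetical rainflow-counted spectrum from the vibration simulation:
# (solder-joint stress amplitude in MPa, cycles accumulated per hour of the profile).
spectrum = [(40.0, 1.2e5), (60.0, 3.0e4), (90.0, 2.0e3)]

# Palmgren-Miner rule: damage is the sum of n_i / N_i; failure is predicted at damage = 1.
damage_per_hour = sum(n / cycles_to_failure(s) for s, n in spectrum)
predicted_life_hours = 1.0 / damage_per_hour
print(f"damage/hour = {damage_per_hour:.3e}, predicted life = {predicted_life_hours:.0f} h")
```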

Permitted Daily Exposure for Diisopropyl Ether as a Residual Solvent in Pharmaceuticals

  • Romanelli, Luca;Evandri, Maria Grazia
    • Toxicological Research
    • /
    • v.34 no.2
    • /
    • pp.111-125
    • /
    • 2018
  • Solvents can be used in the manufacture of medicinal products provided their residual levels in the final product comply with the acceptable limits based on safety data. At the worldwide level, these limits are set by the "Guideline Q3C (R6) on impurities: guideline for residual solvents" issued by the ICH. Diisopropyl ether (DIPE) is a widely used solvent, but the possibility of using it in pharmaceutical manufacture is uncertain because the ICH Q3C guideline includes it in the group of solvents for which "no adequate toxicological data on which to base a Permitted Daily Exposure (PDE) was found". We performed a risk assessment of DIPE based on the available toxicological data, after carefully assessing their reliability using the Klimisch score approach. We found sufficiently reliable studies investigating subchronic, developmental, and neurological toxicity and carcinogenicity in rats, and genotoxicity in vitro. Recent studies also investigated a wide array of toxic effects of gasoline/DIPE mixtures compared with gasoline alone, thus allowing the effects of DIPE itself to be identified. These data allowed a comprehensive toxicological evaluation of DIPE. The main target organs of DIPE toxicity were the liver and kidney. DIPE was not a teratogen and had no genotoxic effects, either in vitro or in vivo. However, it appeared to increase the number of malignant tumors in rats. Therefore, DIPE could be considered a non-genotoxic animal carcinogen, and a PDE of 0.98 mg/day was calculated based on the lowest No Observed Effect Level (NOEL) value of 356 mg/m³ (corresponding to 49 mg/kg/day) for maternal toxicity in a developmental rat toxicity study. In a worst-case scenario, assuming an exceedingly high daily dose of 10 g/day, the allowed DIPE concentration in pharmaceutical substances would be 98 ppm, which is in the range of concentration limits for ICH Q3C guideline Class 2 solvents. This result might be considered for regulatory decisions.
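For context, the 98 ppm figure above follows directly from the ICH Q3C relations between the NOEL, the PDE, and the Option 1 concentration limit. The sketch below restates those relations and checks the reported arithmetic; the modifying factors F1-F5 actually chosen by the authors are not given in the abstract, so the PDE helper is shown only in its general form.

```python
def pde_mg_per_day(noel_mg_kg_day, body_weight_kg, f1, f2, f3, f4, f5):
    """ICH Q3C: PDE = NOEL x weight adjustment / (F1 x F2 x F3 x F4 x F5)."""
    return noel_mg_kg_day * body_weight_kg / (f1 * f2 * f3 * f4 * f5)

def concentration_limit_ppm(pde_mg_day, daily_dose_g):
    """ICH Q3C Option 1: concentration (ppm) = 1000 x PDE (mg/day) / dose (g/day)."""
    return 1000.0 * pde_mg_day / daily_dose_g

# Reported values: PDE = 0.98 mg/day, worst-case daily dose = 10 g/day.
print(concentration_limit_ppm(0.98, 10.0))  # -> 98.0 ppm, matching the abstract
```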

A Study on Warfighting Experimentation for Organizing Operational Troops (작전부대의 인원편성 최적화를 위한 워게임 전투실험 방법에 대한 연구)

  • Lee, Yong-Bin;Yum, Bong-Jin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.14 no.3
    • /
    • pp.423-431
    • /
    • 2011
  • Warfighting experimentation is an important process for identifying requirements against a changing military environment and for verifying proposed measures for reforming military service. The wargame simulation experiment is regarded as one of the most effective means of warfighting experimentation, and its importance is greater than ever. On the other hand, the results of wargame experiments can be unreliable due to the uncertainty involved in the experimental procedure. To improve the reliability of the experimental results, systematic experimental procedures and analysis methods must be employed, and design and analysis of experiments techniques can be used effectively for this purpose. In this paper, AWAM, a wargame simulator, is used to optimize the organization of operational troops. The simulation model describes a warfighting situation in which the 'survival rate of our force' and the 'survival rate of the enemy force' are considered as responses, 'the numbers of weapons in the squad' as control factors, and 'the uncontrollable variables of the battlefield' as noise factors. In addition, for the purpose of effective experimentation, the product array approach, in which the inner and outer orthogonal arrays are crossed, is adopted. Then, the signal-to-noise ratio for each response and the desirabilities for the means and standard deviations of the responses are calculated and used to determine a compromise optimal solution. The experimental procedures and analysis methods developed in this paper can provide guidelines for designing and analyzing wargame simulation experiments for similar warfighting situations.
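The product-array layout mentioned above crosses an inner (control-factor) orthogonal array with an outer (noise-factor) array, so every control setting is evaluated under every noise condition. The sketch below shows that crossing and the per-run summary statistics; the simulator stand-in, factor values, and responses are hypothetical and do not reproduce the AWAM experiment.

```python
import numpy as np

# Hypothetical inner array: control-factor settings (e.g., weapon counts in the squad).
inner = [(2, 1, 4), (2, 2, 6), (3, 1, 6), (3, 2, 4)]
# Hypothetical outer array: noise-factor settings (e.g., visibility, enemy posture).
outer = [(0, 0), (0, 1), (1, 0), (1, 1)]

def simulate_survival_rate(control, noise):
    """Stand-in for a wargame simulator run; returns a fake 'survival rate of our force'."""
    rng = np.random.default_rng(abs(hash((control, noise))) % 2**32)
    rate = 0.5 + 0.05 * sum(control) - 0.1 * sum(noise) + rng.normal(0.0, 0.02)
    return float(np.clip(rate, 0.0, 1.0))

for control in inner:
    # Product-array crossing: every inner run is exposed to every outer (noise) condition.
    y = np.array([simulate_survival_rate(control, noise) for noise in outer])
    sn = -10.0 * np.log10(np.mean(1.0 / y**2))  # larger-the-better S/N for our survival rate
    print(control, f"mean={y.mean():.3f}", f"std={y.std(ddof=1):.3f}", f"S/N={sn:.2f} dB")
```

The mean and standard deviation per inner-array run feed the desirability analysis, while the S/N ratio summarizes robustness against the noise conditions.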

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal
    • /
    • v.7 no.3
    • /
    • pp.53-76
    • /
    • 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. There are numerous factors which cause development projects to become increasingly costly and complex. A product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Based upon previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing complexity. Issues considered include: the definition of complexity; the consequences of complexity; and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, these issues need to be addressed, for example: A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring complexity in a development project? For example, is it sufficient if complexity is defined only from a technological perspective, or is it more desirable to consider the entire array of complexity sources which NPD teams with different functions (e.g., marketing, R&D, manufacturing, etc.) face in the development process? Moreover, is it sufficient if complexity is measured only once during a development project, or is it more effective and useful to trace complexity changes over the entire development life cycle? B. Complexity inherent in a project can have negative as well as positive influences on NPD performance. Thus, which complexity impacts are usually considered negative and which are positive? Project complexity can also affect the entire organization, and complexity may be better assessed from a broader and longer-term perspective. What are some ways in which the long-term impact of complexity on an organization can be assessed and managed? C. Based upon previous studies, several approaches for managing complexity are derived. What are the strengths and weaknesses of each approach? Is there a desirable hierarchy or order among these approaches when more than one approach is used? Are there differences in the outcomes according to industry and product type (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help gain useful insights into understanding and managing complexity. Those organizations that can accurately identify, assess, and manage the complexity inherent in projects are likely to gain important competitive advantages.


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involve two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the applications programmer. The programmer treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process; it usually runs a single program or a small, fixed set of programs, so tailoring an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time with 51 rules
                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6,000 inferences   125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 µs
  FLIPS              48                     122                         156,250

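To make the max-min mechanism described in the entry above concrete, the sketch below evaluates two hypothetical two-input Mamdani-style rules over 64-element discretized fuzzy sets using only elementwise min and max, the operations the proposed specialized instructions would accelerate; the membership functions, rules, and inputs are illustrative assumptions.

```python
import numpy as np

# Discretized universe; the UNC/MCNC chip represents each fuzzy set by 64 elements.
x = np.linspace(0.0, 1.0, 64)

def tri(a, b, c):
    """Triangular membership function sampled on the universe x."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

# Hypothetical antecedent and consequent sets for two rules of the form IF A and B THEN Do E.
A1, B1, E1 = tri(0.0, 0.2, 0.5), tri(0.1, 0.3, 0.6), tri(0.0, 0.3, 0.6)
A2, B2, E2 = tri(0.4, 0.7, 1.0), tri(0.3, 0.6, 0.9), tri(0.5, 0.8, 1.0)

def mamdani(a_idx, b_idx):
    """Max-min inference: min for AND and implication, max to aggregate rule outputs."""
    out = np.zeros_like(x)
    for A, B, E in ((A1, B1, E1), (A2, B2, E2)):
        w = min(A[a_idx], B[b_idx])              # firing strength: min of antecedent memberships
        out = np.maximum(out, np.minimum(w, E))  # clip the consequent, aggregate with max
    return out

agg = mamdani(a_idx=20, b_idx=25)                # crisp inputs given as indices into the universe
crisp = float(np.sum(x * agg) / max(np.sum(agg), 1e-9))  # centroid defuzzification, as on the chip
print(f"defuzzified output = {crisp:.3f}")
```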

Characterization of compounds and quantitative analysis of oleuropein in commercial olive leaf extracts (상업용 올리브 잎 추출물의 화합물 특성과 이들의 oleuropein 함량 비교분석)

  • Park, Mi Hyeon;Kim, Doo-Young;Arbianto, Alfan Danny;Kim, Jung-Hee;Lee, Seong Mi;Ryu, Hyung Won;Oh, Sei-Ryang
    • Journal of Applied Biological Chemistry
    • /
    • v.64 no.2
    • /
    • pp.113-119
    • /
    • 2021
  • Olive (Olea europaea L.) leaves, a raw material for health functional foods and cosmetics, contain abundant polyphenols, including oleuropein (the major bioactive compound), with various biological activities: antioxidant, antibacterial, antiviral, and anticancer activity, as well as inhibition of platelet activation. Oleuropein has been reported to have skin-protectant, antioxidant, anti-ageing, anti-cancer, anti-inflammatory, anti-atherogenic, anti-viral, and anti-microbial activity. Although oleuropein is the key compound in olive leaves, there is still no quantitative approach to determine oleuropein content in commercial products; therefore, a validated analytical method for oleuropein has to be developed. In this study, the components and oleuropein content in 10 types of products were analyzed using a method developed with ultra-performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry, a charged aerosol detector, and a photodiode array detector. A total of 18 compounds, including iridoids (1, 3, 4, 14, and 16-18), a coumarin (2), phenylethanoids (5, 9, and 11), flavonoids (6-8, 10, 12, and 13), and a lignan (15), were tentatively identified in the leaf extracts based on high-resolution mass spectrometry data, and the content of oleuropein in each product was almost identical between the two detection methods. Three commercial products (A, G, H) contained more oleuropein than the suggested content, and five products (B, E, H, I, J) were analyzed within a 5-10% error range. However, two products (C, D) were found to contain far less than the suggested contents. This study suggests that these analytical results for oleuropein could provide useful information for the quality control of olive leaf extracts used in manufactured functional foods.
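As a rough sketch of the kind of comparison reported above, the code below fits a linear calibration curve for an oleuropein standard, back-calculates a product's content from its peak area, and expresses the deviation from the suggested content. The calibration points, peak areas, and suggested value are hypothetical, and the authors' validated UPLC method is not reproduced here.

```python
import numpy as np

# Hypothetical calibration standards: oleuropein concentration (mg/mL) vs. peak area.
conc = np.array([0.05, 0.10, 0.25, 0.50, 1.00])
area = np.array([1.2e4, 2.5e4, 6.1e4, 1.23e5, 2.46e5])

# Linear calibration: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

def oleuropein_mg_per_g(sample_area, dilution_ml, sample_mass_g):
    """Back-calculate oleuropein content from a sample's peak area via the calibration curve."""
    c = (sample_area - intercept) / slope   # mg/mL in the injected solution
    return c * dilution_ml / sample_mass_g  # mg oleuropein per g of product

# Hypothetical product: measured peak area, extraction volume, sample mass, suggested content.
measured = oleuropein_mg_per_g(sample_area=9.8e4, dilution_ml=25.0, sample_mass_g=0.5)
suggested = 20.0  # mg/g (placeholder)
deviation_pct = 100.0 * (measured - suggested) / suggested
print(f"measured = {measured:.1f} mg/g, deviation from suggested content = {deviation_pct:+.1f}%")
```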