• Title/Summary/Keyword: R&D process


Development of the Regulatory Impact Analysis Framework for the Convergence Industry: Case Study on Regulatory Issues by Emerging Industry (융합산업 규제영향분석 프레임워크 개발: 신산업 분야별 규제이슈 사례 연구)

  • Song, Hye-Lim;Seo, Bong-Goon;Cho, Sung-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.199-230
    • /
    • 2021
  • Innovative new products and services are being launched through convergence between heterogeneous industries, and social interest and investment in convergence industries such as AI, big-data-based future cars, and robots are continuously increasing. However, in the process of commercializing convergent new products and services, there are many cases where they do not conform to the existing regulatory and legal system, which creates difficulties for companies launching their products and services into the market. In response to these industrial changes, the current government is promoting the improvement of existing regulatory mechanisms applied to the relevant industries, along with expanded investment in new industries. Against this background, this study aimed to analyze the existing regulatory system that obstructs market entry of innovative new products and services, in order to preemptively predict regulatory issues that will arise in emerging industries, and to establish a regulatory impact analysis system to evaluate the adequacy of regulations and prepare improvement measures. The study is divided into three parts. In the first part, previous studies on regulatory impact analysis and evaluation systems are investigated; these serve as basic data for the development direction of the framework, its indicators, and its items. In the second part, the regulatory impact analysis framework is developed: indicators and items are derived from the previously investigated data and applied to each stage of the framework. In the last part, a case study is presented in which the developed framework is applied to resolve regulatory issues faced by actual companies.
The case study covers the autonomous/electric vehicle industry and the Internet of Things (IoT) industry, because these are among the emerging industries in which the Korean government has recently shown the greatest interest and are judged to be most relevant to realizing an intelligent information society. Specifically, the regulatory impact analysis framework proposed in this study consists of five steps. The first step identifies the industrial size of the target products and services, related policies, and regulatory issues. In the second step, regulatory issues are discovered through a review of regulatory improvement items at each stage of commercialization (planning, production, commercialization). In the third step, factors related to regulatory compliance costs are derived and the costs incurred in complying with existing regulations are calculated. In the fourth step, an alternative is prepared by gathering opinions from the relevant industry and field experts, and the necessity, validity, and adequacy of the alternative are reviewed. In the final step, the adopted alternatives are formulated so that they can be applied to legislation, and the alternatives are reviewed by legal experts. The implications of this study are summarized as follows. From a theoretical point of view, it is meaningful in that it clearly presents a series of procedures for regulatory impact analysis as a framework. Although previous studies mainly discussed the importance and necessity of regulatory impact analysis, this study presents a systematic framework that considers the various factors those studies identified as required for such analysis. From a practical point of view, this study is significant in that the proposed framework was applied to actual regulatory issues.
The results show that proposals related to regulatory issues were submitted to government departments and the relevant law was ultimately revised, suggesting that the proposed framework can be an effective way to resolve regulatory issues. The regulatory impact analysis framework proposed in this study is expected to serve as a meaningful guideline for technology policy researchers and policy makers.
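The five analysis steps above can be sketched as a simple pipeline. The step descriptions and function names below are illustrative only and are not part of the original study:

```python
# Illustrative sketch of the five-step regulatory impact analysis
# framework described above; all names here are hypothetical.
STEPS = [
    "identify industry size, related policies, and regulatory issues",
    "discover issues per commercialization stage (planning, production, commercialization)",
    "derive compliance-cost factors and calculate existing compliance costs",
    "gather industry/expert opinions; review necessity, validity, adequacy of alternatives",
    "formulate adopted alternatives for legislation and obtain legal-expert review",
]

def run_framework(case: str) -> list[str]:
    """Walk a regulatory case through every step, returning an audit trail."""
    return [f"[{case}] step {i}: {step}" for i, step in enumerate(STEPS, start=1)]
```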

Stand-alone Real-time Healthcare Monitoring Driven by Integration of Both Triboelectric and Electro-magnetic Effects (실시간 헬스케어 모니터링의 독립 구동을 위한 접촉대전 발전과 전자기 발전 원리의 융합)

  • Cho, Sumin;Joung, Yoonsu;Kim, Hyeonsu;Park, Minseok;Lee, Donghan;Kam, Dongik;Jang, Sunmin;Ra, Yoonsang;Cha, Kyoung Je;Kim, Hyung Woo;Seo, Kyoung Duck;Choi, Dongwhi
    • Korean Chemical Engineering Research
    • /
    • v.60 no.1
    • /
    • pp.86-92
    • /
    • 2022
  • Recently, the bio-healthcare market has been expanding worldwide for various reasons, including the COVID-19 pandemic. Within it, biometric measurement and analysis technologies are expected to bring about future technological innovation and socio-economic ripple effects. Existing systems require a large-capacity battery to drive signal processing, wireless transmission, and the operating system; however, limited battery capacity imposes spatio-temporal limits on device use. This limitation can interrupt the data stream required for the user's healthcare monitoring, making it one of the major obstacles for healthcare devices. In this study, we report the concept of a stand-alone healthcare monitoring module, based on both triboelectric and electromagnetic effects, that converts biomechanical energy into usable electric energy. The proposed system can operate independently without an external power source. In particular, the wireless foot-pressure monitoring system, built around a rationally designed triboelectric sensor (TES), can recognize the user's walking habits through foot-pressure measurement. By applying the triboelectric effect to the contact-separation behavior that occurs during walking, an effective foot-pressure sensor was made; the performance of the sensor was verified through its electrical output signal as a function of pressure, and its dynamic behavior was measured through a signal-processing circuit using a capacitor. In addition, the biomechanical energy otherwise dissipated during walking is harvested as electrical energy via the electromagnetic induction effect and used as a power source for wireless transmission and signal processing. Therefore, the proposed system has great potential to reduce the inconvenience of charging caused by limited battery capacity and to overcome the problem of data disconnection.
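As a rough illustration of how contact-separation events during walking could be counted from such a sensor's voltage output, the following is a minimal sketch; the threshold value and function name are assumptions, not taken from the paper:

```python
def count_steps(voltage_trace, threshold=0.5):
    """Count rising threshold crossings in a triboelectric voltage trace.

    Each contact-separation event during walking produces a voltage peak;
    a step is registered each time the signal rises above the threshold.
    The 0.5 V threshold is purely illustrative.
    """
    steps, above = 0, False
    for v in voltage_trace:
        if v >= threshold and not above:
            steps += 1
            above = True
        elif v < threshold:
            above = False
    return steps
```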

Optimization and Application Research on Triboelectric Nanogenerator for Wind Energy Based High Voltage Generation (정전발전 기반 바람에너지 수확장치의 최적화 및 고전압 생성을 위한 활용 방안)

  • Jang, Sunmin;Ra, Yoonsang;Cho, Sumin;Kam, Dongik;Shin, Dongjin;Lee, Heegyu;Choi, Buhee;Lee, Sae Hyuk;Cha, Kyoung Je;Seo, Kyoung Duck;Kim, Hyung Woo;Choi, Dongwhi
    • Korean Chemical Engineering Research
    • /
    • v.60 no.2
    • /
    • pp.243-248
    • /
    • 2022
  • As the scope of portable and wearable electronic devices expands, the limitations of heavy and bulky solid-state batteries are becoming apparent. It is therefore urgent to develop a small energy-harvesting device that can partially share the role of a battery, and harvesting energy sources that are otherwise wasted in daily life is becoming more important. Contact electrification, which generates electricity through the coupling of the triboelectric effect and electrostatic induction when two material surfaces come into contact and separate, can effectively harvest the physical and mechanical energy present in the surrounding environment without a complicated intermediate process. Recently, interest in harvesting and utilizing wind energy has grown, since wind is an infinite, eco-friendly energy source among the various environmental energy sources around us. In this study, an energy-harvesting device based on contact electrification was optimized for effective wind-energy harvesting, and a strategy to maximize the use of the generated electricity was investigated. A natural-wind-based fluttering TENG (NF-TENG) using a fluttering film was developed, and its design was optimized. Moreover, a safe high-voltage generation system was developed, and a plan for application in fields requiring high voltage was proposed, exploiting the unique characteristic of a TENG of generating low current at high voltage. In this respect, the results of this study demonstrate that a portable energy-harvesting device based on contact electrification has great potential as a strategy to harvest wind energy that would otherwise be wasted in daily life and to use it widely in fields requiring high voltage.

Application of Environmental Friendly Bio-adsorbent based on a Plant Root for Copper Recovery Compared to the Synthetic Resin (구리 회수를 위한 식물뿌리 기반 친환경 바이오 흡착제의 적용 - 합성수지와의 비교)

  • Bawkar, Shilpa K.;Jha, Manis K.;Choubey, Pankaj K.;Parween, Rukshana;Panda, Rekha;Singh, Pramod K.;Lee, Jae-chun
    • Resources Recycling
    • /
    • v.31 no.4
    • /
    • pp.56-65
    • /
    • 2022
  • Copper is one of the non-ferrous metals used in the electrical/electronic manufacturing industries owing to its superior properties, particularly its high conductivity and low resistivity. The effluent generated from the surface-finishing processes of these industries contains a high copper content that is discharged into water bodies directly or indirectly. This causes severe environmental pollution and also results in the loss of an important, valuable metal. To overcome this issue, continuous R&D activities are ongoing across the globe in the adsorption area with the purpose of finding efficient, low-cost, eco-friendly adsorbents. In view of the above, the present investigation compared the performance of a plant root (Datura root powder) as a bio-adsorbent with that of a synthetic resin (Tulsion T-42) for copper adsorption from such effluent. Batch experiments were carried out to optimize parameters such as adsorbent dose, contact time, pH, and feed concentration. The results indicate that 0.2 g of Datura root powder and 0.1 g of Tulsion T-42 achieved 95% copper adsorption from an initial feed solution of 100 ppm Cu at pH 4 within contact times of 15 and 30 min, respectively. Adsorption data for both adsorbents fitted the Freundlich isotherm well. The experimental results were also validated with kinetic models, which showed that copper adsorption followed a pseudo-second-order rate expression for both adsorbents. Overall, the results demonstrate that the tested bio-adsorbent is potentially applicable for metal recovery from the waste solutions/effluents of metal-finishing units. Given the requirements of commercial viability and minimal environmental damage, Datura root powder, being an effective material for metal uptake, may prove a feasible adsorbent for copper recovery after the necessary scale-up studies.
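The Freundlich fit mentioned above is typically done on the linearized form log qe = log KF + (1/n) log Ce. A minimal sketch of such a fit, using synthetic data rather than the paper's measurements, might look like:

```python
import math

def linfit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def freundlich_fit(Ce, qe):
    """Fit log10(qe) = log10(KF) + (1/n)*log10(Ce); returns (KF, n).

    Ce: equilibrium concentrations; qe: equilibrium adsorption capacities.
    """
    a, b = linfit([math.log10(c) for c in Ce],
                  [math.log10(q) for q in qe])
    return 10 ** a, 1 / b
```

For example, synthetic data generated with KF = 2 and n = 2 (qe = 2·Ce^0.5) is recovered exactly by the fit.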

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal
    • /
    • v.7 no.3
    • /
    • pp.53-76
    • /
    • 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. Numerous factors cause development projects to become increasingly costly and complex. A product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Based upon previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing it, including the definition of complexity, the consequences of complexity, and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, issues such as the following need to be addressed. A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring complexity in a development project? For example, is it sufficient to define complexity only from a technological perspective, or is it more desirable to consider the entire array of complexity sources that NPD teams with different functions (e.g., marketing, R&D, manufacturing) face in the development process? Moreover, is it sufficient to measure complexity only once during a development project, or is it more effective and useful to trace complexity changes over the entire development life cycle? B. Complexity inherent in a project can have negative as well as positive influences on NPD performance. Which complexity impacts are usually considered negative, and which positive? Project complexity can also affect the entire organization, and any complexity is better assessed in a broader and longer-term perspective. In what ways can the long-term impact of complexity on an organization be assessed and managed? C. Based upon previous studies, several approaches for managing complexity are derived. What are the strengths and weaknesses of each approach? Is there a desirable hierarchy or order among these approaches when more than one is used? Are there differences in outcomes according to industry and product type (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help yield useful insights into understanding and managing complexity. Organizations that can accurately identify, assess, and manage the complexity inherent in their projects are likely to gain important competitive advantages.


Development of a Simultaneous Analytical Method for Azocyclotin, Cyhexatin, and Fenbutatin Oxide Detection in Livestock Products using the LC-MS/MS (LC-MS/MS를 이용한 축산물 중 유기주석계 농약 Azocyclotin, Cyhexatin 및 Fenbutatin oxide의 동시시험법 개발)

  • Nam Young Kim;Eun-Ji Park;So-Ra Park;Jung Mi Lee;Yong Hyun Jung;Hae Jung Yoon
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.5
    • /
    • pp.361-372
    • /
    • 2023
  • Organotin pesticides are used as acaricides in agriculture and may contaminate livestock products. This study aimed to develop a rapid and straightforward analytical method for detecting the organotin pesticides azocyclotin, cyhexatin, and fenbutatin oxide in various livestock products, including beef, pork, chicken, egg, and milk, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The extraction process used 1% acetic acid in a mixture of acetonitrile and ethyl acetate (1:1), followed by the addition of anhydrous magnesium sulfate (MgSO4) and anhydrous sodium chloride. The extracts were subsequently purified using octadecyl (C18) and primary secondary amine (PSA) sorbents, after which the supernatant was evaporated. Organotin pesticide recovery ranged from 75.7 to 115.3%, with a coefficient of variation (CV) below 25.3%. These results meet the criteria of the Codex guidelines (CODEX CAC/GL 40). The analytical method developed in this study will be valuable for the analysis of organotin pesticides in livestock products.
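The recovery and CV figures quoted above follow the standard definitions (recovery = measured/spiked × 100; CV = standard deviation/mean × 100). A minimal sketch with illustrative numbers, not the study's data:

```python
import statistics

def recovery_percent(measured: float, spiked: float) -> float:
    """Recovery (%) = measured concentration / spiked concentration * 100."""
    return measured / spiked * 100.0

def cv_percent(replicates: list[float]) -> float:
    """Coefficient of variation (%) = sample std dev / mean * 100."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0
```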

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, demand is increasing for the production of various electronic components and devices such as semiconductors, SMT components, and electrical battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, or plastic, which can lead to serious problems or malfunctioning of the product, and even fires in electric vehicles. To solve these problems, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed by means of non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing-speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) that lowers foreign-material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low resolution that make it challenging to detect foreign substances. First, the global contrast of the X-ray images is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen the high-frequency signal and local contrast. Third, unsharp masking is applied to sharpen edges, making objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Finally, the YOLOv5 algorithm is trained and employed to detect foreign objects.
Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared to detection on the unenhanced low-quality images.
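The first and third preprocessing steps (global histogram stretching and unsharp masking) can be sketched in pure Python on a small grayscale image. This is an illustrative re-implementation under the usual definitions, not the authors' code; the box-blur kernel and sharpening amount are assumptions:

```python
def stretch(img):
    """Global histogram stretching: rescale pixel range to [0, 255]."""
    lo = min(min(r) for r in img)
    hi = max(max(r) for r in img)
    if hi == lo:
        return [[0 for _ in r] for r in img]
    return [[(p - lo) * 255 // (hi - lo) for p in r] for r in img]

def box_blur(img):
    """3x3 box blur with edge clamping (used as the unsharp-mask low-pass)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = cnt = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        acc += img[ny][nx]
                        cnt += 1
            row.append(acc // cnt)
        out.append(row)
    return out

def unsharp(img, amount=1.0):
    """Unsharp masking: add back the high-frequency (image - blur) detail."""
    blur = box_blur(img)
    return [[max(0, min(255, round(p + amount * (p - b))))
             for p, b in zip(ri, rb)] for ri, rb in zip(img, blur)]
```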

Development of a Model of Brain-based Evolutionary Scientific Teaching for Learning (뇌기반 진화적 과학 교수학습 모형의 개발)

  • Lim, Chae-Seong
    • Journal of The Korean Association For Science Education
    • /
    • v.29 no.8
    • /
    • pp.990-1010
    • /
    • 2009
  • To derive brain-based evolutionary educational principles, this study examined studies on the structural and functional characteristics of the human brain, on biological evolution occurring between and within organisms, and on the evolutionary attributes embedded in science itself and in individual scientists' activities. On the basis of the core characteristics of the human brain and the framework of universal Darwinism, or universal selectionism, consisting of generation-test-retention (g-t-r) processes, a Model of Brain-based Evolutionary Scientific Teaching for Learning (BEST-L) was developed. The model consists of three components, three steps, and an assessment part. The three components are the affective (A), behavioral (B), and cognitive (C) components, and each component proceeds through three steps: Diversifying → Emulating (Executing, Estimating, Evaluating) → Furthering (ABC-DEF). The model is 'brain-based' in its consecutive incorporation of the affective component, grounded in the limbic system of the brain associated with emotions; the behavioral component, associated with the occipital lobes (visual processing), the temporal lobes (language generation and understanding), and the parietal lobes (receiving and processing sensory information and executing motor activities of the body); and the cognitive component, grounded in the prefrontal lobes involved in thinking, planning, judging, and problem solving. On the other hand, the model is 'evolutionary' in that it proceeds through the diversifying step, which generates variants in each component; the emulating step, which tests and selects useful or valuable variants; and the furthering step, which extends or applies the selected ones.
Across the three ABC components, to reflect the importance of emotional factors as a starting point of scientific activity, as well as the dominant role of the limbic system relative to the cortex, the model emphasizes the DARWIN (Driving Affective Realm for Whole Intellectual Network) approach.

The Ability of Anti-tumor Necrosis Factor Alpha (TNF-α) Antibodies Produced in Sheep Colostrums

  • Yun, Sung-Seob
    • Korean Society of Dairy Science and Technology: Conference Proceedings
    • /
    • 2007.09a
    • /
    • pp.49-58
    • /
    • 2007
  • The inflammatory process leads to the well-known mucosal damage and thus to further disturbance of the epithelial barrier function, resulting in abnormal intestinal wall function and further accelerating the inflammatory process [1]. Despite these observations, the etiology and pathogenesis of IBD remain rather unclear. Many studies over the past several years have led to great advances in understanding inflammatory bowel disease (IBD) and its underlying pathophysiologic mechanisms. From the current understanding, it is likely that chronic inflammation in IBD is due to aggressive cellular immune responses, including increased serum concentrations of different cytokines. Targeted molecules can therefore be specifically eliminated in their expression directly at the transcriptional level. Promising therapeutic trials are expected against adhesion molecules and pro-inflammatory cytokines such as TNF-α. The future development of immune therapies for IBD therefore holds great promise for better treatment modalities and will also open important new insights into the pathophysiology of inflammation. Cytokine inhibitors such as Immunex (Enbrel) and J&J/Centocor (Remicade), mouse-derived monoclonal antibodies, have been shown in several studies to modulate patients' symptoms; however, these TNF inhibitors can also cause immune-related adverse effects, are costly, and must be administered by injection. Because of the eventual development of unwanted side effects, these two products are used only in a select patient population. The present study was performed to elucidate the ability of TNF-α antibodies produced in sheep colostrum to neutralize TNF-α action in a cell-based bioassay and in a small-animal model of intestinal inflammation. In the in vitro study, the inhibitory effect of the anti-TNF-α antibody from the sheep was determined by cell bioassay.
The sheep antibody at a 1 in 10,000 dilution was able to completely inhibit TNF-α activity in the cell bioassay. Antibodies from the same sheep but different milkings exhibited some variability in inhibiting TNF-α activity, but all exceeded the control sample. In the in vivo study, the degree of inflammation was too severe for reliable experimentation: despite the initial pilot trial, main trial 1 could not detect any effect of the antibody in reducing the impact of PAF and LPS, and main rat trial 2 produced no significant symptoms such as the characteristic acute diarrhea and weight loss of colitis. This study suggests that colostrum from sheep immunized against TNF-α significantly inhibited TNF-α bioactivity in the cell-based assay, while the higher-than-anticipated variability in the two animal models precluded assessment of the antibody's ability to prevent TNF-α-induced intestinal damage in the intact animal. Further study will require an alternative animal model more suitable for testing anti-TNF-α IgA therapy for reducing the impact of inflammation on gut dysfunction, and subsequent pre-clinical and clinical testing will also need generation of more antibody, as current supplies are low.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been observed that, in almost all cases, points common to several fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, although it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common to fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Thus the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and corresponds to the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values over the elements of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element; the word dimension for 8 fuzzy sets would be 8 × 5 bits, so the memory would have been 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The rule weights are computed by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory network). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is thus reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
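The word-length arithmetic above can be checked with a few lines. The function name is illustrative, but the numbers reproduce the paper's example (24-bit words, 128 × 24 = 3072 bits versus 128 × 40 = 5120 bits for full vectorial storage):

```python
import math

def word_length(nfm: int, truth_levels: int, n_fuzzy_sets: int) -> int:
    """Length = nfm * (dm(m) + dm(fm)): bits per antecedent-memory word."""
    dm_m = math.ceil(math.log2(truth_levels))   # bits per membership value
    dm_fm = math.ceil(math.log2(n_fuzzy_sets))  # bits for the set index
    return nfm * (dm_m + dm_fm)

# Paper's example: 128-element universe, 8 fuzzy sets, 32 truth levels, nfm = 3
universe = 128
sparse_bits = universe * word_length(3, 32, 8)  # proposed sparse scheme
dense_bits = universe * 8 * 5                   # vectorial: 8 sets x 5 bits each
```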
