• Title/Summary/Keyword: Integration method


A Study on Lyricism Expression of Color & Realistic Expression reflected in Oriental Painting of flower & birds (전통화조화의 사실적(寫實的) 표현과 시정적(詩情的) 색채표현)

  • Ha, Yeon-Su
    • Journal of Science of Art and Design, v.10, pp.183-218, 2006
  • Colors change over time, corresponding with the value system and aesthetic consciousness of each era. The roles that colors play in painting can be divided into the formative role, based on the contrast and harmony of color planes, and the aesthetic role, expressed by the colors used to represent objects. The aesthetic consciousness of the Orient starts with Civility (禮) and Pleasure (樂), which are closely related to restrained or tempered human feelings. In the art world of the Orient, including poetry, painting, and music, not everything seen and felt from the objects is represented. Layered with the sentiment in the background, the beauty of the Orient emphasizes restraint and temperance, which has long been the essential aesthetic emotion of the Orient. From the very inception of oriental painting, colors formed a symbolic system in which the five colors associated with the philosophy of Yin-Yang and the Five Forces were symbolically connected with the four sacred animals: the Red Peacock, Black Turtle, Blue Dragon, and White Tiger. In this color system the use of colors was not free from ideological constraints, and was further limited by restricted color production and distribution. Therefore, development in color expression seems to have been severely limited by the unavailability of various colors. Studies of the flow of oriental painting show that color expression has changed from symbolic expression to poetic expression, and then to emotional expression, as the mode of painting changed over time. As oriental painting transformed from an art of religious or ceremonial purpose to one of appreciation, the most visible change in color expression is the turn toward realism (simulation). 
Rooted in the naturalistic color expression of the Orient, where the fundamental properties of objects were considered most critical, this realistic color expression depicts the genuine color properties that the objects possess, with many examples in the Flower and Bird Painting prior to the Northern Sung dynasty. This realistic expression of colors changed as poetic sentiments were fused with painting in the later years of the Northern Sung dynasty, when a conversion to light ink and light coloring was witnessed and subjective emotion intervened in representation. This mode of color expression became established as free and creative coloring with vivid expression of individuality. The fusion of coloring and lyricism was borrowed from the post-Northern-Sung trend mentioned earlier, in which painting was fused with poetic sentiments to express the emotion of artists, accompanied by such features as light coloring and compositional change. Here, lyricism refers to the artist's subjective perspective of the world and its expression in refined words with a certain rhythm, the essence of which is the integration of the artist's ego and the world. The poetic ego projects emotion and sentiment onto external objects, or assimilates them, in order to express the emotion and sentiment of one's own ego in depth and most efficiently. This is closely related to the rationale behind the long-standing tradition of continuous representation of the same objects in oriental painting from ancient times to the present day. In oriental thought, nature was not just an object of expression but was recognized as a personified body onto which the artist projects his or her emotions. The result is the rebirth of meaning in painting, completely different from what the same objects previously represented. This process helps achieve integration and unity between the objects and the ego. 
Therefore, this paper discussed the lyrical expression of colors in the works of the author, drawing upon the poetic expression method reflected in traditional Flower and Bird Painting, one of the painting modes depending mainly on color expression. Based on the related discussion and analysis, it was possible to identify the deep thoughts and distinctive expression methods of the Orient and to address the importance of prioritizing the transmission and development of these precious traditions, which will constitute the main identity of the author's future work.


The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia Pacific Journal of Information Systems, v.19 no.1, pp.33-62, 2009
  • Internet computing is a disruptive IT innovation. The Semantic Web can be considered an IT innovation because it has the potential to reduce information overload and enable semantic integration through capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web? What factors affect the adoption and diffusion of Semantic Web innovation? Most studies on the adoption and diffusion of innovation use empirical analysis as a quantitative research methodology in the post-implementation stage. There is criticism that a positivist approach requiring theoretical rigor can sacrifice relevance to practice. Rapid advances in technology require studies relevant to practice. In particular, a quantitative approach to the factors affecting adoption of the Semantic Web is realistically impossible because the Semantic Web is in its infancy. In this early stage of its introduction, however, it is necessary to give practitioners and researchers a model and guidelines for the adoption and diffusion of the technology innovation. Thus, the purpose of this study is to present a model of adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption, using a qualitative research method including multiple case studies and in-depth interviews. The researcher conducted 15 face-to-face interviews and 2 interviews by telephone and e-mail, collecting data until the categories were saturated. Nine interviews, including the 2 telephone interviews, were with nine user organizations adopting the technology innovation, and the others were with three supply organizations. Semi-structured interviews were used to collect data. The interviews were recorded on a digital voice recorder and subsequently transcribed verbatim; 196 pages of transcripts were obtained from about 12 hours of interviews. 
Triangulation of evidence was achieved by examining each organization's website and various documents, such as brochures and white papers. The researcher read the transcripts several times and underlined core words, phrases, or sentences. Data analysis then used the procedure of open coding, in which the researcher forms initial categories of information about the phenomenon being studied by segmenting the information. QSR NVivo version 8.0 was used to categorize sentences containing similar concepts. 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named. Five factors affecting adoption of the Semantic Web were identified. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect the adoption of the Semantic Web from the perspective of technology push. Third, absorptive capacity plays an important role in adoption. Fourth, supplier's competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which results in a gap between users' expectation levels and perceived benefits, has a negative impact on the adoption of the Semantic Web. Finally, a factor comprising critical mass of ontology, budget, and visible effects was identified as a determinant affecting routinization and infusion. The researcher suggested a model of adoption and diffusion of the Semantic Web, representing relationships between the six factors and adoption/diffusion as dependent variables. Six propositions are derived from the adoption/diffusion model to offer guidelines to practitioners and a research model for further studies. 
Proposition 1: Demand pull has an influence on the adoption of the Semantic Web. Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted. Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted. Proposition 2: Technology push has an influence on the adoption of the Semantic Web. Proposition 2-1: From the perspective of user organizations, technology push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on the adoption of the Semantic Web, while uncertainty and lower technology maturity have a negative impact on its adoption. Proposition 2-2: From the perspective of suppliers, technology push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on the adoption of the Semantic Web, while uncertainty, lower technology maturity, complexity, and lower trialability have a negative impact on its adoption. Proposition 3: Absorptive capacities such as organizational formal support systems, officers' or managers' competency in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations. Proposition 4: Supplier's competence has a positive impact on the absorptive capacities of user organizations and on technology push forces. Proposition 5: The greater the gap of expectation between users and suppliers, the later the Semantic Web is adopted. 
Proposition 6: Post-adoption activities such as budget allocation, reaching critical mass, and sharing ontology to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

A Study on the construction of physical security system by using security design (보안디자인을 활용한 시설보안시스템 구축 방안)

  • Choi, Sun-Tae
    • Korean Security Journal, no.27, pp.129-159, 2011
  • Physical security has always been an extremely important facet within the security arena. A comprehensive security plan consists of three components: physical security, personnel security, and information security. These elements are interrelated and may exist in varying degrees depending on the type of enterprise or facility being protected. The physical security component of a comprehensive security program is usually composed of policies and procedures, personnel, barriers, equipment, and records. Human beings have waged a restless struggle to preserve their own and their tribes' lives. Humans in prehistoric ages, however, had not learned how to build strong houses or fortify their residences, so they relied on nature for protection and used caves as shelter and refuge in cold seasons. Throughout history, humans have established various protection methods to protect their lives and assets, and physical security methods form the basis of these. The caves in which primitive men resided were surrounded by rock walls except at the entrance, so safety, especially protection of the tribe from all directions, was guaranteed. The Great Wall of China, considered the longest structure in history, was built over more than one hundred years from about 400 B.C. to prevent invasion by northern tribes, but it was effective only against small incursions: the Mongolian army captured most of China across this wall by about 1200 A.D. European lords in the Middle Ages dug moats around their castles or reinforced them with bascule bridges, providing protection to residents in exchange for the agricultural products they cultivated. In the United States, Edwin Holmes began providing an innovative electric alarm service in the 19th century, spurring the development of the American security industry. 
This was the first of today's electrical security systems, and with further development, security systems combining various electrical components with the relevant facilities have come to account for most of today's security market. As described above, humankind has established various protection methods to preserve life since the beginning, and this development continues. Today, CCTV is installed in most facilities across the country to cope with various social-pathological phenomena and to protect life and assets, so people's daily lives are both protected and observed. Most of these physical security systems are installed to guarantee our safety, but we also bear all their expenses. Therefore, establishing an effective physical security system is a very important and urgent problem. This study suggests methods of establishing an effective physical security system through system integration based on the principles of security design, addressing a need that is increasing rapidly with the demands of modern society.


A Study of Test-Retest Reliability and Interrater Reliability of the Sensory Processing Scale for Children (SPS-C) (아동감각처리척도(Sensory Processing Scale for Children; SPS-C)의 검사-재검사 신뢰도와 검사자간 신뢰도 연구)

  • Kim, Kyeong-Mi; Kim, Ga-Yeon; Lee, Seung-Jin
    • The Journal of Korean Academy of Sensory Integration, v.20 no.2, pp.11-21, 2022
  • Objective: This study examined the test-retest reliability and interrater reliability of the Sensory Processing Scale for Children (SPS-C). Method: Seventy primary caregivers of 3-year-old children with sensory processing difficulties participated in the study. The subjects were recruited through child development centers, welfare centers, and acquaintances located in Seoul, Gyeonggi-do, Busan, and Gyeongsang-do. The test-retest reliability verification targeted 20 main caregivers of children with sensory processing difficulties. Re-evaluation was performed within 7 to 14 days after the initial evaluation; Pearson's correlation coefficient was used to confirm the association between the two time points, and the intraclass correlation coefficient was used to confirm the degree of agreement. The interrater reliability verification was conducted with 18 primary caregivers and 18 secondary caregivers of children with sensory processing difficulties. Each pair of caregivers evaluated the same child, and the intraclass correlation coefficient was used to confirm the agreement between the two sets of caregivers. Results: The test-retest reliability showed Pearson's correlation coefficient r=.914 and intraclass correlation coefficient ICC=.939, indicating a high level of association and agreement. The interrater reliability was ICC=.727, a moderate level of agreement, but the tactile area (ICC=.455) and proprioceptive area (ICC=.439) were not statistically significant and showed a low degree of agreement. Conclusion: This study confirmed that the Sensory Processing Scale for Children (SPS-C) is a stable evaluation tool with verified test-retest and interrater reliability, and it can support standardization studies for future clinical use.
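
The two reliability statistics reported above can be reproduced from raw ratings. Below is a minimal sketch (not code from the study; the function names and data are illustrative) computing Pearson's r and a two-way random-effects ICC(2,1) with NumPy:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two rating vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings is an (n_subjects, k_raters) array."""
    r = np.asarray(ratings, float)
    n, k = r.shape
    grand = r.mean()
    row_means = r.mean(axis=1)   # per-subject means
    col_means = r.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((r - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))
```

For test-retest reliability the two "raters" are the two administration time points; for interrater reliability they are the primary and secondary caregivers rating the same child.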

The Change in Participation Patterns in Play Activities of Children with Autism Spectrum Disorder during COVID-19: A Scoping Review (COVID-19로 인한 자폐스펙트럼 장애아동의 놀이 활동 참여 변화: 주제범위 문헌고찰)

  • Kim, Hyang-Won; Song, Ye-Ji; Kang, Seong-Hyeon; Won, Ha-Eun; Jeong, Yun-Wha
    • The Journal of Korean Academy of Sensory Integration, v.21 no.1, pp.59-73, 2023
  • Objective: To examine changes in the participation patterns of children with Autism Spectrum Disorder (ASD) in play activities during COVID-19 by reviewing the relevant literature. Methods: This scoping review was conducted in five steps. We created a research question and searched for relevant literature published in English through CINAHL, PubMed, ERIC, MEDLINE, Google Scholar, and the Google search engine. After selecting the literature based on inclusion criteria, data were charted on 10 items (author name, journal name, publication year, nation, authors' majors, research method, participants' age and gender, and the quantitative and qualitative results of each study). The results were analyzed using descriptive numerical and thematic analyses. Results: After reviewing 437 articles and 152 websites, six articles were included. These articles were written by experts from various fields and countries. Five themes were highlighted in the selected articles: during COVID-19 there was (1) decreased time in outdoor play, (2) increased play time on screens, (3) increased time spent with family, and (4) increased sensory difficulties, along with (5) recommendations for services for children with disabilities during COVID-19. Conclusion: This study suggests telerehabilitation programs teaching parental behavior strategies in order to address the difficulties that children with ASD may experience when participating in play activities during disasters. The results can be used as fundamental evidence to emphasize the importance of play activities and to systematize the role of occupational therapists and the service guidelines for supporting the play activities of children with disabilities in disasters.

Comparative analysis of auto-calibration methods using QUAL2Kw and assessment on the water quality management alternatives for Sum River (QUAL2Kw 모형을 이용한 자동보정 방법 비교분석과 섬강의 수질관리 대안 평가)

  • Cho, Jae Heon
    • Journal of Environmental Impact Assessment, v.25 no.5, pp.345-356, 2016
  • In this study, auto-calibration methods for a water quality model were compared and analyzed using QUAL2Kw, which can estimate optimum parameters through the integration of a genetic algorithm with QUAL2K. QUAL2Kw was applied to the Sum River, which is greatly affected by the pollution loads of Wonju city. Two auto-calibration methods were examined: applying a single parameter set to the whole river reach, and applying separate parameter sets to each of multiple reaches. Analysis of the CV(RMSE) and the fitness of the GA shows that the separate-parameter auto-calibration method is more precise than the single-parameter method, so the separate-parameter method was applied to the water quality modelling in this study. The calibrated QUAL2Kw was used for three water quality management scenarios for the Sum River, and the water quality impact on the river was analyzed. In scenario 1, which improves the effluent water quality of the Wonju WWTP, BOD and TP concentrations at station Sum River 4-1, representative of the mid-watershed, decreased by 17.7% and 29.1%, respectively, and immediately downstream of the Wonjucheon confluence, BOD and TP concentrations decreased by 50.4% and 40.5%, respectively. In scenario 2, the Wonju water supply intake is closed and a multi-regional water supply drawn from watersheds other than the Sum River is provided. The Sum River water quality in scenario 2 is slightly improved as the flow of the river increases: immediately downstream of the Wonjucheon confluence, BOD and TP concentrations decreased by 0.18 mg/L and 0.0063 mg/L, respectively. In scenario 3, the management alternatives of scenarios 1 and 2 are implemented simultaneously, and the Sum River water quality is slightly better than in scenario 1. 
Water quality predictions for the three scenarios indicate that improving the effluent water quality of the Wonju WWTP is the most efficient alternative for water quality management of the Sum River; in particular, the water quality immediately downstream of the Wonjucheon confluence is greatly improved. When the Wonju water supply intake is closed and a multi-regional water supply is provided, the Sum River water quality is only slightly improved.
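
QUAL2Kw couples a genetic algorithm to the QUAL2K solver so that kinetic parameters are tuned to minimize model-observation error. The same idea can be sketched with a toy first-order BOD decay model standing in for QUAL2K; the model, rate values, and GA settings below are illustrative assumptions, not the actual QUAL2Kw code:

```python
import math
import random

def simulate_bod(k_d, distances, bod0=8.0, velocity=0.3):
    """Toy stand-in for QUAL2K: first-order BOD decay along a river,
    BOD(x) = BOD0 * exp(-k_d * t) with travel time t = x / u (in days).
    k_d in 1/day, distances in m, velocity in m/s."""
    return [bod0 * math.exp(-k_d * x / (velocity * 86400)) for x in distances]

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def calibrate(observed, distances, pop_size=30, generations=50, bounds=(0.01, 2.0)):
    """Minimal genetic algorithm: evolve the decay rate k_d to minimize RMSE
    against observed concentrations (selection, crossover, mutation)."""
    random.seed(1)
    pop = [random.uniform(*bounds) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda k: rmse(simulate_bod(k, distances), observed))
        elite = scored[: pop_size // 2]                  # selection: keep best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = (a + b) / 2                          # crossover: average parents
            child += random.gauss(0, 0.05)               # mutation: small perturbation
            children.append(min(max(child, bounds[0]), bounds[1]))
        pop = elite + children
    return min(pop, key=lambda k: rmse(simulate_bod(k, distances), observed))
```

With observations generated at a true decay rate of 0.4/day, the GA recovers a nearby value; in a multi-reach ("separate parameter") calibration the same loop would run once per reach with its own parameter set.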

An integrated Method of New Casuistry and Specified Principlism as Nursing Ethics Methodology (새로운 간호윤리학 방법론;통합된 사례방법론)

  • Um, Young-Rhan
    • Journal of Korean Academy of Nursing Administration, v.3 no.1, pp.51-64, 1997
  • The purpose of the study was to introduce an integrated approach of new casuistry and specified principlism for resolving ethical problems and studying nursing ethics. In studying clinical ethics and nursing ethics, there has been no systematic research method. While nurses often experience ethical dilemmas in practice, much of the previous research on nursing ethics has focused merely on describing the existing problems. In addition, ethicists have presented theoretical analysis and critique rather than specific problem-solving strategies. Clinical situations call for an integrated method that can provide an objective description of existing problem situations as well as specific problem-solving methods. We inherit two distinct ways of discussing ethical issues. One frames these issues in terms of principles, rules, and other general ideas; the other focuses on the specific features of particular kinds of moral cases. In the first, general ethical rules relate to specific moral cases in a theoretical manner, with universal rules serving as "axioms" from which particular moral judgments are deduced as theorems. In the second, this relation is frankly practical, with general moral rules serving as "maxims" that can be fully understood only in terms of the paradigmatic cases that define their meaning and force. Theoretical arguments are structured in ways that free them from any dependence on the circumstances of their presentation and ensure them a validity that is not affected by the practical context of use. In formal arguments, particular conclusions are deduced from ("entailed by") the initial axioms or universal principles at the apex of the argument, so the truth or certainty attaching to those axioms flows downward to the specific instances to be "proved". 
In the language of formal logic, the axioms are major premises, the facts that specify the present instance are minor premises, and the conclusion to be "proved" follows necessarily from the initial premises. Practical arguments, by contrast, involve a wider range of factors than formal deductions and are read with an eye to their occasion of use. Instead of aiming at strict entailments, they draw on the outcomes of previous experience, carrying over the procedures used to resolve earlier problems and reapplying them in new problematic situations. Practical arguments depend for their power on how closely the present circumstances resemble those of the earlier precedent cases for which this particular type of argument was originally devised. So, in practical arguments, the truths and certitudes established in precedent cases pass sideways, providing "resolutions" of later problems. In the language of rational analysis, the facts of the present case define the grounds on which any resolution must be based, and the general considerations that carried weight in similar situations provide warrants that help settle future cases. The resolution of any problem therefore holds good presumptively; its strength depends on the similarities between the present case and the precedents, and its soundness can be challenged (or rebutted) in situations that are recognized as exceptional. Jonsen and Toulmin (1988) and Jonsen (1991) introduced new casuistry as a practical method. The Oxford English Dictionary defines casuistry quite accurately as "that part of ethics which resolves cases of conscience, applying the general rules of religion and morality to particular instances in which circumstances alter cases or in which there appears to be a conflict of duties." They modified the casuistry of the medieval ages for use in clinical situations, characterized by "the typology of cases and analogy as an inference method". A case is the unit of analysis. 
The structure of a case is made up of the interaction of the situation and moral rules. The situation is what surrounds, or stands around; the moral rule is the essence of the case. The analogy can be objective because "the grounds, the warrants, the theoretical backing, the modal qualifiers" are identified in the cases. Specified principlism is the method by which DeGrazia (1992) integrated principlism with the specification introduced by Richardson (1990). In this method, a principle is specified by adding information about the limitations of its scope and restricting its range; these must be substantive qualifications. The integrated method is a combination of new casuistry and specified principlism. An example is the study "Ethical problems experienced by nurses in the care of terminally ill patients" (Um, 1994), in which semi-structured in-depth interviews were conducted with fifteen nurses who mainly took care of terminally ill patients. In the first stage, twenty-one cases were identified as relevant to the topic and then classified into four types of problems; one of these types, for instance, was the patient's refusal of care. In the second stage, the ethical problems in each case were defined and the case was analyzed, identifying the reasons, the ethical values, and the related ethical principles in the cases; interpretation was then carried out synthetically by integrating the result of the analysis with the situation. The third stage was the ordering phase of the cases, done according to the result of the interpretation and the principles common to the cases. The first two stages follow the methodology of new casuistry, and the final stage follows the methodology of specified principlism. The common principles were the principle of autonomy and the principle of caring. The principle of autonomy was specified: when competent patients refuse care, the nurse should discontinue the care out of respect for the patients' decision. 
The principle of caring was also specified: when competent patients refuse care, nurses should continue to provide the care in spite of the patients' refusal, in order to preserve their lives. These specifications may lead to opposite behaviors, which emphasizes the importance of nurses' will and intentions in making decisions in clinical situations.


Simultaneous Pesticide Analysis Method for Bifenox, Ethalfluralin, Metolachlor, Oxyfluorfen, Pretilachlor, Thenylchlor and Trifluralin Residues in Agricultural Commodities Using GC-ECD/MS (GC-ECD/MS를 이용한 농산물 중 Bifenox, Ethalfluralin, Metolachlor, Oxyfluorfen, Pretilachlor, Thenylchlor 및 Trifluralin의 동시 분석)

  • Ahn, Kyung Geun; Kim, Gi Ppeum; Hwang, Young Sun; Kang, In Kyu; Lee, Young Deuk; Choung, Myoung Gun
    • Korean Journal of Environmental Agriculture, v.37 no.2, pp.104-116, 2018
  • BACKGROUND: This experiment was conducted to establish a simultaneous analysis method for 7 herbicides from 3 classes with similar physicochemical properties, namely diphenyl ethers (bifenox and oxyfluorfen), dinitroanilines (ethalfluralin and trifluralin), and chloroacetamides (metolachlor, pretilachlor, and thenylchlor), in crops using GC-ECD/MS. METHODS AND RESULTS: All 7 pesticide residues were extracted with acetone from representative samples of five raw products: apple, green pepper, Kimchi cabbage, hulled rice, and soybean. The extract was diluted with saline water and directly partitioned into n-hexane/dichloromethane (80/20, v/v) to remove polar co-extractives in the aqueous phase. For the hulled rice and soybean samples, an n-hexane/acetonitrile partition was additionally employed to remove non-polar lipids. The extract was finally purified by optimized Florisil column chromatography. The analytes were separated and quantitated by GLC with ECD using a DB-1 capillary column. The accuracy and precision of the proposed method were validated by recovery experiments on every crop sample fortified with bifenox, ethalfluralin, metolachlor, oxyfluorfen, pretilachlor, thenylchlor, and trifluralin at 3 concentration levels per crop, in triplicate. CONCLUSION: Mean recoveries of the 7 pesticide residues ranged from 75.7 to 114.8% in the five representative agricultural commodities. The coefficients of variation were all less than 10%, irrespective of sample type and fortification level. Limits of quantitation (LOQ) of the analytes were 0.004 (ethalfluralin and trifluralin), 0.008 (metolachlor and pretilachlor), 0.006 (thenylchlor), 0.002 (oxyfluorfen), and 0.02 (bifenox) mg/kg, as verified by the recovery experiments. A confirmatory technique using GC/MS with selected-ion monitoring was also provided to clearly identify suspected residues. 
Therefore, this analytical method was reproducible and sensitive enough to determine the residues of bifenox, ethalfluralin, metolachlor, oxyfluorfen, pretilachlor, thenylchlor, and trifluralin in agricultural commodities.
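
The recovery and precision figures quoted above follow from standard definitions. A small sketch of the arithmetic (the concentrations below are made-up illustrations, not data from this study):

```python
def mean_recovery(measured, fortified):
    """Mean recovery (%) from spiked replicates:
    recovery = measured concentration / fortification level * 100."""
    recoveries = [m / fortified * 100 for m in measured]
    return sum(recoveries) / len(recoveries)

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100 (n-1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return sd / mean * 100

# Hypothetical triplicate at a 0.05 mg/kg fortification level
measured = [0.047, 0.044, 0.046]
recovery = mean_recovery(measured, 0.05)   # about 91.3%
cv = coefficient_of_variation(measured)    # a few percent
```

A method validates at a given level when the mean recovery falls in the accepted range (here 75.7-114.8%) and the CV stays under 10%, as reported for all seven analytes.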

Qualitative and quantitative PCR detection of insect-resistant genetically modified rice Agb0101 developed in Korea (해충저항성 유전자변형 벼 Agb0101에 대한 PCR 검정)

  • Shin, Kong-Sik; Lee, Jin-Hyoung; Lim, Myung-Ho; Woo, Hee-Jong; Qin, Yang; Suh, Seok-Cheol; Kweon, Soon-Jong; Cho, Hyun-Suk
    • Journal of Plant Biotechnology, v.40 no.1, pp.18-26, 2013
  • Genetically modified (GM) rice Agb0101, which expresses the modified insecticidal toxin gene cry1Ac (mcry1Ac1), was developed by the Rural Development Administration in Korea. To monitor the possible release of Agb0101 in the future, it is necessary to develop a reliable detection method. Here, we developed PCR detection methods for the monitoring and tracing of this GM rice. A primer pair (RBEgh-1/-2) from a starch branching enzyme (RBE4) gene was designed as an endogenous reference, giving rise to an expected PCR amplicon of 101 bp. For qualitative PCR detection, construct- and event-specific primers were designed on the basis of the integration sequence of the T-DNA. Event-specific PCRs specifically amplified the 5'- or 3'-junction region spanning the native genomic DNA and the integrated gene construct, while no amplified product was observed in other crops, rice varieties, or other insect-resistant transgenic rice lines. The event-specific real-time PCR method used a TaqMan probe and the plasmid pRBECrR, containing both the rice endogenous gene RBE4 sequence and the 5'-junction sequence, as the reference molecule. The absolute limit of quantification (LOQ) of the real-time PCR was established at around 10 copies of the plasmid molecule pRBECrR. Thereafter, different amounts of transgenic rice (1, 3, 5, and 10%, respectively) were quantified using the established real-time PCR method, with accuracy expressed as bias below 19.55%, standard deviations (SD) of 0.06-0.40, and relative standard deviations (RSD) of 3.80-7.01%. These results indicate that the qualitative and quantitative PCR methods can be used effectively to detect the event Agb0101 for monitoring and traceability.
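
Absolute quantification with a TaqMan assay rests on a standard curve of Ct against log10 copy number; GM content is then the ratio of event-specific to endogenous-gene copies. A minimal sketch of that arithmetic (the slope, intercept, and Ct values below are illustrative assumptions, not measurements from the study):

```python
def fit_standard_curve(log_copies, ct_values):
    """Least-squares line Ct = slope * log10(copies) + intercept,
    as used for real-time PCR absolute quantification."""
    n = len(log_copies)
    mx = sum(log_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate copy number from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

def gm_percent(event_copies, endogenous_copies):
    """GM content (%) = event-specific copies / endogenous-gene copies * 100."""
    return event_copies / endogenous_copies * 100
```

A dilution series of the reference plasmid (here a perfect curve with slope -3.32, i.e. ~100% PCR efficiency) gives the line; sample Cts for the junction and RBE4 targets are then converted to copies and their ratio to a GM percentage.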

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips have multiple datapaths for rule evaluation, and they execute multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates a new inference computation every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions would also be useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and a UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES

                         MIPS R3000    MIPS R3000
    Time                 (regular)     (with min/max)   ASIC
    6000 inferences      125 s         49 s             0.0038 s
    1 inference          20.8 ms       8.2 ms           6.4 ㎲
    FLIPS                48            122              156,250
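  The inference mechanism the chips implement, max-min composition with centroid defuzzification over fuzzy sets stored as 64-element arrays, can be sketched in software. This is a generic Mamdani-style illustration under assumed conventions, not the chips' actual rule set; the membership functions and rule definitions below are invented. Note how min and max dominate the inner loop, which is what motivates the specialized RISC instructions discussed in the abstract.

```python
# Mamdani-style max-min fuzzy inference with centroid defuzzification.
# Fuzzy output sets are arrays of membership grades over a discretized
# universe of 64 points, mirroring the UNC/MCNC chip's representation.

N = 64  # points in the discretized output universe

def triangle(n, center, width):
    """A triangular membership function sampled at n integer points."""
    return [max(0.0, 1.0 - abs(i - center) / width) for i in range(n)]

def infer(rules, inputs):
    """Evaluate rules of the form IF A and B THEN Do E.

    Each rule is ((mu_a, mu_b), mu_e): two input membership functions
    applied to the crisp inputs, and an output fuzzy set. Rule strength
    is the minimum (fuzzy AND) of the input grades; the output set is
    clipped by that strength (min), and rule outputs are combined by
    maximum (fuzzy OR) -- the max-min compositional rule.
    """
    combined = [0.0] * N
    for (mu_a, mu_b), mu_e in rules:
        strength = min(mu_a(inputs[0]), mu_b(inputs[1]))
        for i in range(N):
            combined[i] = max(combined[i], min(strength, mu_e[i]))
    return combined

def centroid(mu):
    """Defuzzify by the centroid (center-of-gravity) method."""
    total = sum(mu)
    return sum(i * m for i, m in zip(range(N), mu)) / total if total else 0.0

# Two toy rules over crisp inputs in [0, 1]; all shapes are assumptions.
low = lambda x: max(0.0, 1.0 - x)   # "input is low"
high = lambda x: max(0.0, x)        # "input is high"
rules = [((low, low), triangle(N, 16, 8)),
         ((high, high), triangle(N, 48, 8))]
out = infer(rules, (0.2, 0.3))
crisp = centroid(out)
```

  With inputs (0.2, 0.3) the "low" rule fires more strongly than the "high" rule, so the defuzzified output lands nearer the first rule's consequent. On the chip, the fuzzification step is a table lookup and the centroid division is done in hardware; here both are plain Python for clarity.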