• Title/Summary/Keyword: Use Instructions


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique developed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested. Both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES

    Time             MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences  125 s                  49 s                        0.0038 s
    1 inference      20.8 ms                8.2 ms                      6.4 µs
    FLIPS            48                     122                         156,250
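The max-min compositional inference and centroid defuzzification described in the abstract can be sketched in software. The following is a minimal illustrative sketch, not the chips' datapath: the function names, the rule encoding, and the 64-element universe indexing are assumptions for illustration.

```python
# Minimal sketch of max-min Mamdani inference over fuzzy sets stored as
# 64-element membership arrays (the representation used by the chips).

def mamdani_infer(rules, inputs, universe_size=64):
    """rules: list of (antecedents, consequent); each antecedent and the
    consequent are membership arrays. inputs: crisp indices, one per
    antecedent. Returns the aggregated output fuzzy set."""
    out = [0.0] * universe_size
    for antecedents, consequent in rules:
        # Rule strength: min of the antecedent memberships at the inputs
        strength = min(a[x] for a, x in zip(antecedents, inputs))
        # Mamdani implication clips the consequent at the rule strength;
        # rules are combined with max (max-min composition)
        for i in range(universe_size):
            out[i] = max(out[i], min(strength, consequent[i]))
    return out

def defuzzify_centroid(fuzzy_out):
    # Centroid defuzzification, performed on-chip in the UNC/MCNC design
    num = sum(i * m for i, m in enumerate(fuzzy_out))
    den = sum(fuzzy_out)
    return num / den if den else 0.0
```

Min and max dominate the inner loop here, which is why the dedicated min/max instructions of the RISC approach pay off.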

  • PDF

The Instructional Influences of Metacognitive Learning Strategies in Elementary School Science Course (초등학교 자연 수업에서 메타인지 학습 전략의 효과)

  • Noh, Tae-Hee;Jang, Shin-Ho;Lim, Hee-Jun
    • Journal of The Korean Association For Science Education
    • /
    • v.18 no.2
    • /
    • pp.173-182
    • /
    • 1998
  • This study investigated the influences of metacognitive learning strategies upon 6th-graders' achievement, science process skill, use of cognitive strategies, use of metacognitive strategies, self-efficacy, intrinsic value, attitude toward science class, and scientific attitude. The metacognitive learning strategies were developed on the basis of previous results and modified in a pilot study. Before the instruction, a pretest of motivation was administered and used as a blocking variable. The score of a previous achievement test was used as a covariate for achievement and science process skill. Tests of the use of cognitive strategies, use of metacognitive strategies, self-efficacy, intrinsic value, attitude toward science class, and scientific attitude were also administered, and their scores were used as covariates. After the instruction, a researcher-made achievement test, the Middle Grades Integrated Science Process Skills Test, and post-tests of the above variables were administered. Two-way ANCOVA results revealed that the scores of the treatment group were significantly higher than those of the control group for all tests except science process skill. No interactions between the treatment and the level of previous motivation were found. Educational implications are discussed.

  • PDF

Perceptions of Freshmen Students on the Use of A University Library Site (신입생들의 대학도서관 사이트에 관한 인식)

  • Kim, Yang-Woo
    • Journal of Korean Library and Information Science Society
    • /
    • v.37 no.4
    • /
    • pp.181-200
    • /
    • 2006
  • This study examined the perceptions of freshman students on the use of a university library site. The students were in the first month of their undergraduate program at Hansung University and had not been given a formal library instruction session. The results show that users experienced uncertainty, difficulty, confusion, and anxiety while interacting with various functions of the library site. The findings revealed that these perceptions originated from (1) the insufficiency of the students' knowledge and skills related to library use and (2) the inadequacy of the library system's features, in particular its interface features. Based on the findings, implications for improving library instructional services and system interfaces are suggested.

  • PDF

Evaluation of fitness in implant screw as tightening torque in dental laboratory (기공실에서의 임플란트 토크값에 따른 적합도 평가)

  • Song, Young-Gyun
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.31 no.4
    • /
    • pp.310-315
    • /
    • 2015
  • Purpose: The purpose of this study was to measure the tightening torque applied to dental implants in the dental laboratory and to analyze the effects of different tightening torques. Materials and Methods: The tightening torque for dental implants in the dental laboratory was measured with a digital torque gauge. The lengths of the abutment and analog were measured at the manufacturer's recommended tightening torque and at the measured torque, and the data were statistically analyzed. Results: The mean tightening torque of implant screws in the dental laboratory was 1.563 ± 0.332 Ncm. The total length of the external-type implant system showed no significant differences, but the internal-type implant system showed a significant difference (P < 0.05) across tightening torques. Conclusion: Implant prostheses should be made under the manufacturer's instructions, especially regarding the tightening torque of the screw. For the fidelity of implant prostheses, dental technicians should learn how to use the torque gauge.

The Effect of an Instruction Using Analog Systematically in Middle School Science Class (중학교 과학 수업에서 비유물을 체계적으로 사용한 수업의 효과)

  • Noh, Tae-Hee;Kwon, Hyeok-Soon;Lee, Seon-Uk
    • Journal of The Korean Association For Science Education
    • /
    • v.17 no.3
    • /
    • pp.323-332
    • /
    • 1997
  • In order to use analogs more systematically in science class, an instructional model was designed on the basis of the analogical reasoning processes (encoding, inference, mapping, application, and response) in Sternberg's component process theory. The model has five phases (introducing the target context, cued retrieval of the analog context, mapping similarity and drawing the target concept, application, and elaboration), and the instructional effects of using the model upon students' comprehension of science concepts and motivational level of learning were investigated. The treatment and control groups (one class each) were selected from 8th-grade classes and taught about chemical change and chemical reaction over a period of 10 class hours. The treatment group was taught with materials based on the model, while the control group received traditional instruction without analogs. Before the instruction, modified versions of the Patterns of Adaptive Learning Survey and the Group Assessment of Logical Thinking were administered, and their scores were used as covariates for students' conceptions and motivational level of learning, respectively. An analogical reasoning ability test was also administered, and its score was used as a blocking variable. After the instruction, students' conceptions were measured by a researcher-made science conception test, and their motivational level of learning was measured by a modified version of the Instructional Materials Motivation Scale. The results indicated that the adjusted mean score of the conception test for the treatment group was significantly higher than that of the control group at the .01 level of significance. No significant interaction between the instruction and analogical reasoning ability was found. Although the motivational level of learning for the treatment group was higher than that for the control group, the difference was statistically insignificant. Educational implications are discussed.

  • PDF

An automatic detection scheme of anti-debugging routines to the environment for analysis (분석 환경에 따른 안티 디버깅 루틴 자동 탐지 기법)

  • Park, Jin-Woo;Park, Yong-Su
    • Journal of Internet Computing and Services
    • /
    • v.15 no.6
    • /
    • pp.47-54
    • /
    • 2014
  • Anti-debugging is one of the techniques implemented within computer code to hinder attempts at reverse engineering, so that attackers or analysts cannot use debuggers to analyze the program. The technique has been applied to various programs and is still commonly used to prevent malware or malicious-code attacks or to protect programs from being analyzed. In this paper, we suggest an automatic detection scheme for anti-debugging routines. For automatic detection, debuggers and a simulator were used to extract trace information on Application Program Interface (API) calls as well as executed instructions. The extracted instructions were then examined and compared so as to automatically detect points where suspicious activity was captured as anti-debugging routines. In experiments using this method, the scheme properly detected the anti-debugging routines of 21 out of the 25 anti-debugging techniques introduced in this paper. The technique in this paper is therefore not dependent upon any particular anti-debugging method, and the detection scheme is expected to remain applicable to anti-debugging techniques developed or discovered in the future.
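The core idea of the scheme, comparing execution traces gathered in different analysis environments and flagging where behavior diverges, can be sketched as follows. This is a hypothetical illustration only: the traces are hand-written stand-ins, since real trace extraction requires a debugger or simulator as described above.

```python
# Sketch: find the first point where two instruction traces, recorded
# under two different analysis environments, diverge. Such a divergence
# point is a candidate anti-debugging routine.

def find_divergence(trace_a, trace_b):
    """Return the index of the first differing entry, or None if the
    traces agree over their common prefix."""
    for i, (a, b) in enumerate(zip(trace_a, trace_b)):
        if a != b:
            return i
    return None

# Illustrative stand-in traces: a debugger-presence check takes a
# different branch under a debugger than under a simulator.
debugger_trace  = ["push ebp", "call IsDebuggerPresent", "test eax, eax", "jnz exit"]
simulator_trace = ["push ebp", "call IsDebuggerPresent", "test eax, eax", "jz run"]
divergence_point = find_divergence(debugger_trace, simulator_trace)
```

In the real scheme, the traces contain API calls and executed instructions extracted by the debugger and simulator, not hand-written mnemonics.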

A Vectorization Technique at Object Code Level (목적 코드 레벨에서의 벡터화 기법)

  • Lee, Dong-Ho;Kim, Ki-Chang
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.5
    • /
    • pp.1172-1184
    • /
    • 1998
  • ILP (Instruction-Level Parallelism) processors use code-reordering algorithms to expose parallelism in a given sequential program. When applied to a loop, such an algorithm produces a software-pipelined loop, in which each iteration contains a sequence of parallel instructions composed of data-independent instructions collected from several iterations. For vector loops, however, the software pipelining technique cannot expose the maximum parallelism because it schedules the program based only on data dependencies. This paper proposes to schedule vector loops differently. We develop an algorithm to detect vector loops at the object-code level and suggest a new vector scheduling algorithm for them. Our vector scheduling improves performance because it can schedule based not only on data dependencies but also on loop structure and iteration conditions at the object-code level. We compare the resulting schedules with those produced by software-pipelining techniques in terms of performance.
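The distinction the paper relies on can be shown with a toy contrast (written in Python for brevity; the paper itself works on object code): a vector loop has no cross-iteration data dependences, while a recurrence does, and the latter is what limits a purely dependence-driven software pipeliner.

```python
# Vector loop: c[i] depends only on a[i] and b[i]; every iteration is
# independent, so all iterations could issue in parallel on vector hardware.
def vector_add(a, b):
    return [x + y for x, y in zip(a, b)]

# Recurrence: each partial sum needs the previous one, so iterations
# cannot be fully parallelized; only the limited overlap that software
# pipelining provides applies here.
def prefix_sum(a):
    total, out = 0, []
    for x in a:
        total += x
        out.append(total)
    return out
```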

  • PDF

A Study on the Curricular Model Development for Information User Instruction in Korean Library and Information Science Education (한국문헌정보학 교육에서 정보이용자교육 교과과정 모형개발 연구)

  • Kim, Tae-Kyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.41 no.2
    • /
    • pp.387-412
    • /
    • 2007
  • Recently, many countries have recognized the importance of instructing library users in information literacy so that they can use information positively and actively. However, among the library and information science courses at colleges in South Korea, which educate future librarians, few have the objective of equipping students with the ability to teach information literacy. Since this lack of appropriate education is reflected in library practice, librarians have many difficulties in preparing for and performing user instruction. As a solution to this problem, this study suggests opening a course on information user instruction in the library and information science curricula of colleges in South Korea and concretely demonstrates a curricular model, including a course syllabus showing when the course would be offered.

Variable Input Gshare Predictor based on Interrelationship Analysis of Instructions (명령어 연관성 분석을 통한 가변 입력 gshare 예측기)

  • Kwak, Jong-Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.4
    • /
    • pp.19-30
    • /
    • 2008
  • Branch history is one of the major input vectors in branch prediction; therefore, the proper use of branch history plays a critical role in improving branch prediction accuracy. To improve branch prediction accuracy, this paper proposes a new branch history management policy based on an interrelationship analysis of instructions. First, we propose three different algorithms to analyze the relationship: a register-writing method, a branch-reading method, and a merged method. We then propose a variable-input gshare predictor as an implementation of these algorithms. In the simulation section, we present performance differences among the algorithms and analyze their characteristics. In addition, we compare branch prediction accuracy between our proposals and conventional fixed-input predictors. A performance comparison for the optimal-input branch predictor is also provided.
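A conventional fixed-input gshare predictor, the baseline the paper compares against, can be sketched as follows. The history length is a constructor parameter here, which is the knob a variable-input scheme would tune; the class and parameter names are illustrative, and the paper's history-selection algorithms are not reproduced.

```python
# Toy gshare branch predictor: the XOR of the global branch history with
# the branch PC indexes a table of 2-bit saturating counters.

class Gshare:
    def __init__(self, history_bits=8, table_bits=10):
        self.hist = 0
        self.hist_mask = (1 << history_bits) - 1
        self.index_mask = (1 << table_bits) - 1
        self.table = [1] * (1 << table_bits)  # init: weakly not-taken

    def predict(self, pc):
        idx = (pc ^ self.hist) & self.index_mask
        return self.table[idx] >= 2           # taken if counter is 2 or 3

    def update(self, pc, taken):
        idx = (pc ^ self.hist) & self.index_mask
        c = self.table[idx]
        self.table[idx] = min(3, c + 1) if taken else max(0, c - 1)
        # shift the outcome into the global history register
        self.hist = ((self.hist << 1) | int(taken)) & self.hist_mask
```

After a short warm-up, the predictor learns periodic patterns whose period fits inside the history register, which is exactly why matching the history length to the branch's behavior matters.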

  • PDF

The Design and Simulation of Out-of-Order Execution Processor using Tomasulo Algorithm (토마술로 알고리즘을 이용하는 비순차실행 프로세서의 설계 및 모의실행)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.4
    • /
    • pp.135-141
    • /
    • 2020
  • Today, the CPUs in general-purpose computers such as servers, desktops, and laptops, as well as in home appliances and embedded systems, consist mostly of multicore processors. To improve performance, each core should be an out-of-order execution processor based on the Tomasulo algorithm. An out-of-order execution processor using the Tomasulo algorithm can execute available instructions in any order and perform speculation in order to reduce control dependencies; its performance can therefore be significantly improved compared to an in-order execution processor. In this paper, an out-of-order execution processor using the Tomasulo algorithm and the ARM instruction set is designed using VHDL record data types and simulated with GHDL. As a result, operations on programs written in ARM instructions are performed successfully.
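The dependency-tracking core of the Tomasulo algorithm, register renaming through reservation-station tags and result broadcast on a common data bus, can be sketched compactly. This is an illustrative model only, not the paper's VHDL design: it omits functional-unit latencies, issue width, and speculation, and all names are assumptions.

```python
# Tiny Tomasulo-style model: a register is either a committed value or is
# renamed to the tag of the reservation station that will produce it;
# completed results broadcast on a common data bus and wake up waiters.

class RS:
    """Reservation station holding one issued operation."""
    def __init__(self, tag, op, src1, src2):
        self.tag, self.op = tag, op
        self.v1, self.q1 = src1  # (value, producer-tag); q is None when ready
        self.v2, self.q2 = src2

    def ready(self):
        return self.q1 is None and self.q2 is None

class Tomasulo:
    def __init__(self, nregs=8):
        self.regs = [0] * nregs
        self.stat = [None] * nregs  # tag of the pending writer per register
        self.stations = []
        self.dest = {}              # tag -> destination register
        self.next_tag = 0

    def issue(self, op, rd, rs1, rs2):
        def read(r):  # operand value if available, else the producer's tag
            t = self.stat[r]
            return (self.regs[r], None) if t is None else (None, t)
        tag, self.next_tag = self.next_tag, self.next_tag + 1
        self.stations.append(RS(tag, op, read(rs1), read(rs2)))
        self.dest[tag] = rd
        self.stat[rd] = tag         # rename rd: later readers wait on this tag

    def step(self):
        """Execute one ready station (any order) and broadcast its result."""
        for rs in self.stations:
            if rs.ready():
                val = {"add": rs.v1 + rs.v2, "mul": rs.v1 * rs.v2}[rs.op]
                self.stations.remove(rs)
                self._broadcast(rs.tag, val)
                return True
        return False

    def _broadcast(self, tag, val):
        for rs in self.stations:    # wake up stations waiting on this tag
            if rs.q1 == tag: rs.v1, rs.q1 = val, None
            if rs.q2 == tag: rs.v2, rs.q2 = val, None
        rd = self.dest[tag]
        if self.stat[rd] == tag:    # write back unless rd was renamed again
            self.regs[rd] = val
            self.stat[rd] = None
```

Issuing dependent instructions and then calling step() until no station is ready resolves them in dataflow order, which is the property that lets the hardware execute independent instructions out of program order.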