• Title/Summary/Keyword: Explicit Function

Search Result 359, Processing Time 0.024 seconds

Analysis of External Representations in Matter Units of 7th Grade Science Textbooks Developed Under the 2015 Revised National Curriculum (2015 개정 교육과정에 따른 7학년 과학교과서 물질 영역에 제시된 외적 표상의 분석)

  • Yoon, Heojeong
    • Journal of The Korean Association For Science Education
    • /
    • v.40 no.1
    • /
    • pp.61-75
    • /
    • 2020
  • In this study, the external representations presented in two units of the 2015 revised seventh-grade science curriculum, 'Property of Gas' and 'Changes of States of Matter,' were analyzed to draw educational implications. External representations in five science textbooks were analyzed according to six criteria: 'type of representation,' 'interpretation of surface features,' 'relatedness to text,' 'existence and properties of a caption,' 'degree of correlation between representations comprising a multiple one,' and 'function of representation.' The characteristics of the typical representations related to each achievement standard of the two units were also analyzed. The results were as follows. Macro representations were most frequent for 'type of representation,' and explicit representations for 'interpretation of surface features.' For 'relatedness to text,' 'completely related and linked' and 'completely related and unlinked' representations were most frequent, meaning that most representations were properly related to the text. Most representations had appropriate captions. For 'degree of correlation between representations comprising a multiple one,' most multiple representations were sufficiently linked. Complete representations were most frequent overall for 'function of representation,' although incomplete representations were more frequent in the inquiry sections. The typical representations for each achievement standard differed in type, information contained, symbols used, and so on. Educational implications for the use of representations in seventh-grade textbooks are discussed.

A Study on the Product differentiation Process by the Structuring of Design Factors (디자인 인자의 구조화에 의한 제품 차별화 프로세스 연구)

  • Kim, Hyun
    • Archives of design research
    • /
    • v.13 no.2
    • /
    • pp.73-80
    • /
    • 2000
  • In this study, design information was defined separately from general product information, and the factors reflected in product design were extracted on the basis of their values and roles. Design factors were classified into five types according to their disposition. ·Innovation factor - an element that did not previously exist, or one related to explicit reformation. ·Open factor - an active element that not only improves current performance but also induces new functions through understanding of usage situations and new possibilities. ·Anterior factor - an element that prolongs and develops the early development requirements of products through C.I.- and P.I.-related elements, characteristics of previous models, and design strategy. ·Self-evidence factor - an element related to function visualization through product structure, which can consolidate shape and function. ·Rigid factor - an element, based on human factors engineering, related to the safety and efficiency of users. This classification was obtained by defining the major characteristics of products in view of the target consumer and market characteristics. On this classification, a factor-structuring design process was proposed that efficiently derives a differentiated final product by synthesizing the factors of higher importance as dominant factors. With such a factor-structuring process, product differentiation may be achieved by giving individual character to each product, combining the design dominant factors associated with the product's specific purpose from the stage of product concept development. Moreover, this process may serve as an approach for responding actively to the varied and specific demands of consumers.

  • PDF

Three-dimensional thermal-hydraulics/neutronics coupling analysis on the full-scale module of helium-cooled tritium-breeding blanket

  • Qiang Lian;Simiao Tang;Longxiang Zhu;Luteng Zhang;Wan Sun;Shanshan Bu;Liangming Pan;Wenxi Tian;Suizheng Qiu;G.H. Su;Xinghua Wu;Xiaoyu Wang
    • Nuclear Engineering and Technology
    • /
    • v.55 no.11
    • /
    • pp.4274-4281
    • /
    • 2023
  • The blanket is of vital importance for engineering application of the fusion reactor. Nuclear heat deposition in materials is the main heat source in the blanket structure. In this paper, a three-dimensional method for thermal-hydraulics/neutronics coupling analysis is developed and applied to the full-scale module of the helium-cooled ceramic breeder tritium breeding blanket (HCCB TBB) designed for the China Fusion Engineering Test Reactor (CFETR). An explicit coupling scheme based on a cell-to-cell mapping method is used to transfer data between the two analyses. The coupling algorithm is realized by a user-defined function compiled in Fluent. The three-dimensional model is established, and the coupling analysis is performed using the parallelized Coupling Analysis of Thermal-hydraulics and Neutronics Interface Code (CATNIC). The results reveal a relatively small influence of the coupling analysis compared to the traditional method, which uses a radial fitting function for the internal heat source. However, the coupling analysis method is quite important considering the nonuniform distribution of the neutron wall loading (NWL) along the poloidal direction. Finally, the structure of the blanket is optimized using the coupling method to satisfy the thermal requirements of all materials. A nonlinear effect between thermal-hydraulics and neutronics is found during the blanket structure optimization, and the tritium production performance is slightly reduced after optimization. Such an adverse effect should be evaluated thoroughly in future work.
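The explicit coupling loop described above can be sketched in miniature. The toy solvers below stand in for the neutronics and Fluent thermal calculations; the function names, numbers, and the one-to-one cell mapping are illustrative assumptions, not the paper's CATNIC implementation:

```python
import numpy as np

def neutronics_heat(temps):
    # Toy volumetric heat source with a weak, Doppler-like temperature
    # feedback; units and coefficients are arbitrary.
    return 1e6 * (1.0 - 1e-4 * (temps - 600.0))

def thermal_solve(q, t_cool=600.0, h_eff=5e3):
    # Toy lumped thermal model: each cell sits at coolant temperature
    # plus its heat source divided by an effective conductance.
    return t_cool + q / h_eff

temps = np.full(10, 600.0)          # initial cell temperatures [K]
for _ in range(50):                 # explicit (Picard) coupling iterations
    q = neutronics_heat(temps)      # neutronics -> heat deposition per cell
    new_temps = thermal_solve(q)    # cell-to-cell mapping is 1:1 here
    if np.max(np.abs(new_temps - temps)) < 1e-6:
        break                       # converged: fields are self-consistent
    temps = new_temps
```

Because the feedback is a mild contraction, the loop converges in a handful of iterations; real coupled calculations iterate the same way between far heavier solvers.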

Simple Formulae for Buckling and Ultimate Strength Estimation of Plates Subjected to Water Pressure and Uniaxial Compression (수압(水壓)과 압축력(壓縮力)을 받는 평판(平板)의 좌굴(挫屈) 및 최종강도(最終强度) 추정식(推定式))

  • Paik, Jeom-K.;Kim, Chang-Y.
    • Bulletin of the Society of Naval Architects of Korea
    • /
    • v.25 no.4
    • /
    • pp.69-80
    • /
    • 1988
  • This paper proposes simple formulae for estimating the buckling and ultimate strength of plates subjected to water pressure and uniaxial compression. To construct a formula for elastic buckling strength, a parametric study of actual ship plates with varying aspect ratios and magnitudes of water pressure is carried out by means of the principle of minimum potential energy. Based on the results of the parametric study, a new formula is expressed approximately as a continuous function of the loads and the aspect ratio. To obtain a formula for ultimate strength, the in-plane stress distribution of the plates is investigated through large-deflection analysis, and the total in-plane stresses are expressed in explicit form. By applying Mises's plasticity condition, an ultimate strength criterion is then derived. For plates under relatively small water pressure, the results of the proposed formulae agree well with those of other methods and experiments, but the present formula overestimates the ultimate strength in the range of large water pressure. Since actual ship plates are subjected to relatively small water pressure except for impact loads due to slamming and the like, the present formulae are considered applicable for practical use.

  • PDF
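For context, the classical elastic buckling result that parametric fits of this kind typically extend is the simply supported rectangular plate under uniaxial compression. This is the standard textbook baseline, not the paper's combined-load formula:

```latex
% Simply supported rectangular plate (length a, breadth b, thickness t)
% under uniaxial compression; E = Young's modulus, \nu = Poisson's ratio:
\sigma_{cr} = k_c \,\frac{\pi^2 E}{12\,(1-\nu^2)} \left(\frac{t}{b}\right)^{2},
\qquad
k_c = \left(\frac{m b}{a} + \frac{a}{m b}\right)^{2}
% m is the number of buckling half-waves, chosen to minimize k_c.
% The paper's formula additionally accounts for lateral water pressure.
```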

Personalized Media Control Method using Probabilistic Fuzzy Rule-based Learning (확률적 퍼지 룰 기반 학습에 의한 개인화된 미디어 제어 방법)

  • Lee, Hyong-Euk;Kim, Yong-Hwi;Lee, Tae-Youb;Park, Kwang-Hyun;Kim, Yong-Soo;Cho, Joon-Myun;Bien, Z. Zenn
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.2
    • /
    • pp.244-251
    • /
    • 2007
  • Intention reading is essential for providing personalized services toward more convenient and human-friendly services in a complex ubiquitous environment such as a smart home. If a system has knowledge about a user's behavioral patterns and intentions, it can automatically provide more qualified and satisfactory services in advance of the user's explicit command. In this sense, learning capability is a key function for intention reading from the viewpoint of knowledge discovery. In this paper, we introduce a personalized media control method as a possible application in a smart home. Note that data patterns such as human behavior contain many inconsistent data points, due to limitations of feature extraction and insufficient available features, where separable data groups are intermingled with inseparable ones. To deal with such data patterns, we introduce an effective engineering approach that combines fuzzy logic and probabilistic reasoning. The proposed learning system, based on the IFCS (Iterative Fuzzy Clustering with Supervision) algorithm, effectively extracts probabilistic fuzzy rules from the given numerical training data. Furthermore, an extended architectural design methodology for the learning system incorporating the IFCS algorithm is introduced. Finally, experimental results from a media contents recommendation system are given to show the effectiveness of the proposed system.
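The iterative fuzzy clustering at the core of such a scheme can be illustrated with plain fuzzy c-means. This sketch omits the supervision step that distinguishes IFCS, and the toy data stand in for behavioral features:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Standard fuzzy c-means: the iterative membership/centroid update
    that clustering-with-supervision schemes like IFCS build upon."""
    # Deterministic init: spread initial centers across the data indices.
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(iters):
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)               # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)              # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated toy clusters stand in for the behavior data.
X = np.vstack([np.zeros((10, 2)), 5.0 * np.ones((10, 2))])
U, centers = fuzzy_c_means(X)
```

On separable data the memberships become nearly crisp; on the intermingled patterns the abstract describes, they stay graded, which is exactly what a probabilistic fuzzy rule then captures.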

Quasi-fiscal Activities of the Bank of Korea (한국은행의 준(準)재정활동)

  • Koh, Youngsun
    • KDI Journal of Economic Policy
    • /
    • v.25 no.1
    • /
    • pp.99-145
    • /
    • 2003
  • Quasi-fiscal activities (QFAs) refer to those activities that public corporations carry out to achieve policy objectives of the government. QFAs often lead to the understatement of the government involvement in the economy and the overstatement of its financial balance, thereby lowering fiscal transparency and hiding fiscal risks. Central banks, as public corporations, perform various QFAs in many countries. I define QFAs in this case as those activities that are not directly related to the intrinsic function of central banks, whose responsibility lies in the administration of monetary policy and the provision of banking services for the government and commercial banks. In Korea, the Bank of Korea (BOK) has been an active source of QFAs. Of particular importance are the policy loans to commercial banks to promote their lending to small- and medium-sized enterprises and others. The outstanding stock of policy loans increased rapidly in the aftermath of the recent economic crisis, and stood at 7.6 trillion won (20 percent of the reserve money) at the end of 2002. Another important QFA by BOK stems from the transfer of part of its profits to the central government. The accumulated transfer during 1998-2002 amounted to 9.9 trillion won. My calculation shows that if these and other QFAs had been carried out by the government as explicit fiscal activities, the consolidated central government financial balance would have been below the actual balance by about 0.5 percent of GDP in each year since the economic crisis. It is suggested that the QFAs by BOK be reduced in coming years not only to enhance fiscal transparency but also to expand the flexibility of BOK's reserve management. Abolishing policy loans and minimizing transfers to the government would be the first step in this direction. BOK should also consider paying interest on the government deposit held in BOK.

  • PDF

A Study on the Analysis of the Error in Photometric Stereo Method Caused by the General-purpose Lighting Environment (測光立體視法에서 범용조명원에 기인한 오차 해석에 관한 연구)

  • Kim, Tae-Eun;Chang, Tae-Gyu;Choi, Jong-Soo
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.11
    • /
    • pp.53-62
    • /
    • 1994
  • This paper presents a new approach to analyzing the errors that arise from a nonideal, general-purpose lighting environment when the Photometric Stereo Method (PSM) is applied to estimate the surface orientation of a three-dimensional object. The approach introduces explicit modeling of the lighting environment, including a circular-disk-type irradiance object plane, and direct simulation of the error distribution with this model. The light source is modeled as a point source with a certain beam angle, and the luminance distribution on the irradiance plane is modeled as a Gaussian function with different deviation values. A simulation algorithm is devised to estimate the light-source orientation by computing the average luminance intensities obtained from irradiance object planes positioned in three different orientations. The effect of the nonideal lighting model is directly reflected in the simulation because of the analogy between the PSM and the proposed algorithm. With an instrumental tool designed to provide arbitrary orientations of the object plane at the origin of the coordinate system, experiments can be performed systematically for error analysis and compensation. Simulations are performed to find the error distribution while widely varying the light model and the orientation set of the object plane, and the simulation results are compared with those of experiments performed in the same way. The experiments confirm that a substantial portion of the error is due to the nonideal general-purpose lighting environment.

  • PDF
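The ideal Lambertian recovery step that PSM error analyses start from can be sketched as follows. The light directions, albedo, and normal are made-up example values; the paper's contribution, modeling the nonideal lighting, is not reproduced here:

```python
import numpy as np

# Ideal photometric stereo: under Lambertian reflectance, the intensity
# vector is I = L @ (rho * n), so with three non-coplanar light directions
# the scaled normal rho*n is recovered by solving a 3x3 linear system.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])           # one unit light direction per row
rho = 0.7                                  # surface albedo (example value)
n_true = np.array([0.0, 0.0, 1.0])         # true surface normal
I = L @ (rho * n_true)                     # noiseless observed intensities
g = np.linalg.solve(L, I)                  # recover rho * n
rho_est = np.linalg.norm(g)                # albedo = magnitude of g
n_est = g / rho_est                        # unit surface normal
```

Errors in the real lighting (finite beam angle, Gaussian falloff) perturb `I` away from this model, which is the error source the paper quantifies.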

Policy Modeling for Efficient Reinforcement Learning in Adversarial Multi-Agent Environments (적대적 멀티 에이전트 환경에서 효율적인 강화 학습을 위한 정책 모델링)

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.3
    • /
    • pp.179-188
    • /
    • 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy through trial-and-error interactions in a dynamic environment where other agents can influence its performance. Most previous works on multiagent reinforcement learning tend to apply single-agent techniques without any extension, or rest on unrealistic assumptions even when they build and use explicit models of other agents. In this paper, the basic concepts that constitute the common foundation of multiagent reinforcement learning techniques are first formulated and, based on these concepts, previous works are compared in terms of their characteristics and limitations. A policy model of the opponent agent and a new multiagent reinforcement learning method using this model are then introduced. Unlike previous works, the proposed method utilizes a policy model of the opponent agent instead of a model of its Q function. Moreover, the method can improve learning efficiency by using a simpler policy model than richer but time-consuming ones such as Finite State Machines (FSMs) and Markov chains. In this paper, the Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed method is analyzed through experiments using this game as a testbed.
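The general idea of acting against a learned opponent policy model, rather than an opponent Q-function model, can be sketched as follows. This is a generic joint-action Q-learner with an empirical opponent model, not the paper's exact method:

```python
import random
from collections import defaultdict

ACTIONS = [0, 1]
Q = defaultdict(float)                    # Q[(state, my_action, opp_action)]
counts = defaultdict(lambda: [1, 1])      # Laplace-smoothed opponent counts

def opp_policy(state):
    # Opponent policy model: empirical action frequencies per state.
    c = counts[state]
    total = sum(c)
    return [ci / total for ci in c]

def best_response(state):
    # Pick the action with highest expected Q under the opponent model.
    probs = opp_policy(state)
    ev = [sum(p * Q[(state, a, oa)] for oa, p in enumerate(probs))
          for a in ACTIONS]
    return max(ACTIONS, key=lambda a: ev[a])

def update(state, my_a, opp_a, reward, next_state, alpha=0.1, gamma=0.9):
    counts[state][opp_a] += 1             # refine the opponent model
    probs = opp_policy(next_state)
    ev_next = max(sum(p * Q[(next_state, a, oa)] for oa, p in enumerate(probs))
                  for a in ACTIONS)
    target = reward + gamma * ev_next     # bootstrap against the model
    Q[(state, my_a, opp_a)] += alpha * (target - Q[(state, my_a, opp_a)])
```

The frequency table is far cheaper to maintain than an FSM or Markov-chain opponent model, which is the efficiency argument the abstract makes.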

Prediction of Lung Cancer Based on Serum Biomarkers by Gene Expression Programming Methods

  • Yu, Zhuang;Chen, Xiao-Zheng;Cui, Lian-Hua;Si, Hong-Zong;Lu, Hai-Jiao;Liu, Shi-Hai
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.21
    • /
    • pp.9367-9373
    • /
    • 2014
  • In the diagnosis of lung cancer, rapid distinction between small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) tumors is very important. Serum markers, including lactate dehydrogenase (LDH), C-reactive protein (CRP), carcino-embryonic antigen (CEA), neuron-specific enolase (NSE) and Cyfra21-1, are reported to reflect lung cancer characteristics. In this study, lung tumors were classified based on biomarkers (measured in 120 NSCLC and 60 SCLC patients) by setting up optimal biomarker joint models with a powerful computerized tool, gene expression programming (GEP). GEP is a learning algorithm that combines the advantages of genetic programming (GP) and genetic algorithms (GA). It focuses on the relationships between variables in sets of data, builds models to explain these relationships, and has been used successfully in formula finding and function mining. As a basis for defining a GEP environment for SCLC and NSCLC prediction, three explicit predictive models were constructed. CEA and NSE are frequently used lung cancer markers in clinical trials, while CRP, LDH and Cyfra21-1 also carry significant meaning in lung cancer; based on CEA and NSE, we set up three GEP models: GEP1 (CEA, NSE, Cyfra21-1), GEP2 (CEA, NSE, LDH) and GEP3 (CEA, NSE, CRP). The best classification result was obtained when CEA, NSE and Cyfra21-1 were combined: 128 of 135 subjects in the training set and 40 of 45 subjects in the test set were classified correctly, for accuracy rates of 94.8% in the training set and 88.9% in the test set. With GEP2, accuracy decreased by 1.5% in the training set and 6.6% in the test set; with GEP3, by 0.82% and 4.45%, respectively. Serum Cyfra21-1 is a useful and sensitive serum biomarker for discriminating between NSCLC and SCLC, and GEP modeling is a promising and excellent tool for the diagnosis of lung cancer.
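GEP's defining encoding, a linear gene read breadth-first into an expression tree (Karva notation), can be sketched as follows. The operator set and example genes are illustrative, not the evolved biomarker models from the study:

```python
# A GEP gene is a flat string of operators and terminals; decoding fills
# an expression tree level by level (breadth-first), so mutated strings
# always decode to valid formulas.
OPS = {'+': 2, '-': 2, '*': 2, 'Q': 1}   # symbol arities; 'Q' = protected sqrt

def eval_kexpr(gene, terminals):
    """Decode a K-expression into a tree breadth-first, then evaluate it."""
    root = {'sym': gene[0], 'kids': []}
    queue, i = [root], 1
    while queue:
        node = queue.pop(0)
        for _ in range(OPS.get(node['sym'], 0)):   # terminals have arity 0
            child = {'sym': gene[i], 'kids': []}
            i += 1
            node['kids'].append(child)
            queue.append(child)

    def ev(n):
        s, k = n['sym'], n['kids']
        if s == '+': return ev(k[0]) + ev(k[1])
        if s == '-': return ev(k[0]) - ev(k[1])
        if s == '*': return ev(k[0]) * ev(k[1])
        if s == 'Q': return abs(ev(k[0])) ** 0.5
        return terminals[s]                        # terminal, e.g. a biomarker

    return ev(root)

# '+*abc' decodes to (b*c) + a; with a=2, b=3, c=4 it evaluates to 14.
value = eval_kexpr('+*abc', {'a': 2.0, 'b': 3.0, 'c': 4.0})
```

In a biomarker model, the terminals would be the measured CEA, NSE, and Cyfra21-1 values, and evolution would search over gene strings for the best-discriminating formula.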

The Trend and Prospect of the Nursing Intervention Classification (간호중재분류의 동향과 전망)

  • Park, Sung-Ae
    • Journal of Home Health Care Nursing
    • /
    • v.3
    • /
    • pp.75-85
    • /
    • 1996
  • The Nursing Intervention Classification (NIC) comprises 433 intervention lists intended to standardize the nursing language. Efforts to standardize and classify nursing care are important because they make explicit what was previously implicit, assumed, and unknown. NIC is a standardized language of both nurse-initiated and physician-initiated nursing treatments. Each of the 433 interventions has a label, a definition, and a set of activities that a nurse performs to carry it out. It defines the interventions performed by all nurses, regardless of setting or specialty. Principles of label, definition, and activity construction were established so that there is consistency across the classification. NIC was developed for the following reasons: 1. Standardization of the nomenclature of nursing treatments. 2. Expansion of nursing knowledge about the links between diagnoses, treatments, and outcomes. 3. Development of nursing and health care information systems. 4. Teaching decision making to nursing students. 5. Determination of the costs of services provided by nurses. 6. Planning for resources needed in nursing practice settings. 7. A language to communicate the unique function of nursing. 8. Articulation with the classification systems of other health care providers. The process of NIC development: 1. Develop, implement, and evaluate an expert review process to evaluate feedback on specific interventions in NIC and to refine the interventions and classification as feedback indicates. 2. Define and validate indirect-care interventions. 3. Refine, validate, and publish the taxonomic grouping of the interventions. 4. Translate the classification into a coding system that can be used for computerization, for articulation with other classifications, and for reimbursement. 5. Construct an electronic version of NIC to help agencies incorporate the classification into nursing information systems. 6. Implement and evaluate the use of the classification in nursing information systems in five different agencies. 7. Establish mechanisms to build nursing knowledge through the analysis of electronically retrievable clinical data. 8. Publish a second edition of the nursing interventions classification with taxonomic groupings and results of field testing. The following research is suggested for developing NIC in Korea: 1. Identify the intervention lists in Korea. 2. Identify the nursing resources needed to perform the nursing interventions. 3. A comparative study between Korea and the U.S.A. on NIC. 4. Linkage among nursing diagnoses, nursing interventions, and nursing outcomes. 5. Linkage between NIC and other health care information systems. 6. Determination of nursing costs based on NIC.

  • PDF