• Title/Summary/Keyword: 비례 문제 (proportion problems)

Search Results: 449

A Study on the Performance of Multicast Transmission Protocol using FEC Method and Local Recovery Method based on Receiver in Mobile Host (이동 호스트에서 FEC기법과 수신자 기반 지역복구 방식의 멀티캐스트 전송 프로토콜 연구)

  • 김회옥;위승정;이웅기
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.1
    • /
    • pp.68-76
    • /
    • 2002
  • Multicast to mobile hosts suffers from the problems of host mobility, multicast decision, triangle routing, tunnel convergence, retransmission implosion, and bandwidth waste. In particular, bandwidth waste on the radio link is a definite factor that decreases the transmission rate. To solve these problems, this paper proposes a new multicast transmission protocol called FIM (Forward Error Correction Integrated Multicast), which supports a reliable packet recovery mechanism by integrating IP Mobility Support for host mobility, IGMP (Internet Group Management Protocol) for group management, and DVMRP (Distance Vector Multicast Routing Protocol) for multicast routing; it also uses FEC and a receiver-based local recovery method. Performance is measured by dividing losses into the homogeneous independent loss, heterogeneous independent loss, and shared source link loss models. The results show that performance improves in proportion to the size of the local area group when the size of the transmission group exceeds a designated size. This indicates that FIM is effective in environments with large amounts of data and many receivers on mobile hosts.


Allocating CO2 Emission by Sector: A Claims Problem Approach (Claims problem을 활용한 부문별 온실가스 감축목표 분석)

  • Yunji Her
    • Environmental and Resource Economics Review
    • /
    • v.31 no.4
    • /
    • pp.733-753
    • /
    • 2022
  • The Korean government established its Nationally Determined Contribution (NDC) in 2015. After a revision in 2019, the government updated an enhanced target at the end of last year. When the NDC is addressed, the emission targets of each sector, such as power generation, industry, and buildings, are also set. This paper analyzes the emission target of each sector by applying a claims problem (or bankruptcy problem) developed from cooperative game theory. Five allocation rules from the claims-problem literature are introduced, and the properties of each rule are considered axiomatically. This study applies the five rules to allocating carbon emissions by sector under the NDC target and compares the results with the announced government targets. For the power generation sector, the government target is set lower than the emissions allocated by the five rules. On the other hand, the government target for the industry sector is higher than the results of the five rules. In the other sectors, the government's targets are similar to the results of the rule that allocates emissions in proportion to each claim.
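The proportional rule named at the end of the abstract, together with one other classic claims-problem rule (constrained equal awards), can be sketched as follows; the sector claims and the cap are invented numbers, not the paper's NDC data:

```python
# Claims problem: an endowment E (total allowable emission) is divided
# among claimants whose claims c_i sum to more than E.

def proportional(endowment: float, claims: list[float]) -> list[float]:
    """Each claimant receives a share proportional to its claim."""
    total = sum(claims)
    return [endowment * c / total for c in claims]

def constrained_equal_awards(endowment: float, claims: list[float]) -> list[float]:
    """Everyone gets min(claim, r), with r chosen so the awards sum to E."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):                 # bisection on the common award r
        r = (lo + hi) / 2
        if sum(min(c, r) for c in claims) < endowment:
            lo = r
        else:
            hi = r
    return [min(c, r) for c in claims]

claims = [300.0, 200.0, 100.0]           # hypothetical sector claims
target = 450.0                           # hypothetical NDC-style cap
assert proportional(target, claims) == [225.0, 150.0, 75.0]
```

Under constrained equal awards the smallest claimant is fully served here (the common award settles at 175), illustrating how different rules shift the burden between large and small claimants.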

The Development of the Compensational Thinking Through the Compensation activities of 'Thinking Science' Program ('생각하는 과학' 프로그램의 보상 논리 활동에 의한 보상적 사고 수준 변화)

  • Kim, sun-Ja;Lee, Sang-Kwon;Park, Jong-Yoon;Kang, Seong-Joo;Choi, Byung-Soon
    • Journal of The Korean Association For Science Education
    • /
    • v.22 no.3
    • /
    • pp.604-616
    • /
    • 2002
  • The purpose of this study was to analyze the development of compensational thinking through the compensation activities of the 'Thinking Science' program. A total of 138 elementary school students were sampled and divided into two groups: an experimental group of 74 students and a control group of 64 students. Both the compensation activities of the 'Thinking Science' program and the regular science curriculum were implemented for the experimental group, while only the regular science curriculum was implemented for the control group. Both groups were pre-tested with Science Reasoning Task II and compensational thinking test I, and post-tested with compensational thinking test II. This study revealed that the types of strategies used in solving compensation problems could be categorized as illogical explanation, rule automation, proportionality, explanation in qualitative terms, additive quantification, and inverse proportionality, and that they were related to the context of the items. The compensation activities of the 'Thinking Science' program were found to be effective for the development of compensational thinking.

Study on Applicability of Nonproportional Model for Teaching Second Graders the Number Concept (초등학교 2학년 수 개념 지도를 위한 비비례모델의 적용 가능성 탐색)

  • Kang, Teaseok;Lim, Miin;Chang, Hyewon
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.19 no.3
    • /
    • pp.305-321
    • /
    • 2015
  • This study started with the question of whether the nonproportional model used in the unit assessment for 2nd graders is appropriate for them. It aims to explore the applicability of the nonproportional model to 2nd graders when they learn about numbers. To achieve this goal, we analyzed elementary mathematics textbooks, applied two kinds of tests to 2nd graders who had learned three-digit numbers using the proportional model, and investigated their cognitive characteristics through interviews. The results show that using the nonproportional model in the initial stages of 2nd grade can cause several didactical problems. Firstly, the nonproportional models were presented only in the unit assessment of the 2nd grade textbook, without any learning activity involving them. Secondly, the size of each nonproportional model was not written on the model when it was presented. Thirdly, the type of nonproportional model introduced in the initial stages was the most difficult one. Fourthly, 2nd graders tend to have great difficulty understanding the relationships between nonproportional models and recognizing the nonproportional model on the basis of the concept of place value. Finally, the question about the relationship between nonproportional models sticks to the context of multiplication, without considering the context of addition, which is more familiar to the students.

A Design of Multiplication Unit of Elementary Mathematics Textbook by Making the Best Use of Diversity of Algorithm (알고리즘의 다양성을 활용한 두 자리 수 곱셈의 지도 방안과 그에 따른 초등학교 3학년 학생의 곱셈 알고리즘 이해 과정 분석)

  • Kang, Heung-Kyu;Sim, Sun-Young
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.14 no.2
    • /
    • pp.287-314
    • /
    • 2010
  • An algorithm is a chain of mechanical procedures capable of solving a problem. In modern mathematics education, teaching algorithms still plays an important role, though a smaller one than in the past. A conspicuous characteristic of the current elementary mathematics textbook's treatment of multiplication algorithms is its excessive convergence on the 'standard algorithm.' But there are many algorithms other than the standard one for calculating multiplication, and this diversity is important from a didactical point of view. In this thesis, we reconstructed the experimental learning and teaching plan of the multiplication algorithm unit by making the best use of the diversity of multiplication algorithms. Its core contents are as follows. Firstly, it handled various modified algorithms in addition to the standard algorithm. Secondly, it did not direct children to use the standard algorithm exclusively, but encouraged them to select an algorithm according to their interest. We carried out a teaching experiment governed by the new lesson design and analysed its effects. Through this study, we obtained the following results and suggestions. Firstly, the experimental learning and teaching plan was effective for understanding the place-value principle and the distributive law: the experimental group, which learned various modified algorithms in addition to the standard algorithm, displayed a higher degree of understanding than the control group. Secondly, as for computational ability, the experimental group did not show better achievement than the control group. The cause, we surmise, is that we taught the children various modified algorithms and allowed them to select an algorithm by preference, so the experimental group was more interested in the diversity of algorithms and their application than in correct computation. Thirdly, the lattice method has not been adopted in the majority of present mathematics textbooks, but it ranked high in the children's preferences. We suggest that mathematics textbooks developed henceforth should adopt the lattice method.

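The lattice method mentioned in the abstract reduces to recording each digit product in a cell and summing the digits along place-value diagonals. A compact sketch (the function name is ours):

```python
def lattice_multiply(a: int, b: int) -> int:
    """Multiply by the lattice method: digit products summed on diagonals."""
    xs = [int(d) for d in str(a)]
    ys = [int(d) for d in str(b)]
    # diagonal i+j collects the tens digit of each cell product,
    # diagonal i+j+1 the units digit; diagonal k has weight 10^(L-1-k)
    diagonals = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            tens, units = divmod(x * y, 10)
            diagonals[i + j] += tens
            diagonals[i + j + 1] += units
    # combine diagonals with their place values; carries resolve arithmetically
    result = 0
    for d in diagonals:
        result = result * 10 + d
    return result

assert lattice_multiply(27, 34) == 918   # same cells a pupil would draw
```

The pedagogical appeal is that every cell is a single-digit fact and the carrying is deferred to one final diagonal sum, which makes the place-value structure visible.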

Fast Multi-View Synthesis Using Duplex Forward Mapping and Parallel Processing (순차적 이중 전방 사상의 병렬 처리를 통한 다중 시점 고속 영상 합성)

  • Choi, Ji-Youn;Ryu, Sae-Woon;Shin, Hong-Chang;Park, Jong-Il
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.11B
    • /
    • pp.1303-1310
    • /
    • 2009
  • A glassless 3D display requires multiple images taken from different viewpoints to show a scene. The simplest way to obtain multi-view images is to use as many cameras as there are required views, but then synchronization between the cameras and the computation and transmission of large amounts of data become critical problems. Thus, generating such a large number of viewpoint images effectively is emerging as a key technique in 3D video technology. Image-based view synthesis is an algorithm for generating various virtual viewpoint images using a limited number of views and depth maps. Because a virtual view image can be expressed as a transformed image of a real view under a depth condition, we propose an algorithm that computes multi-view synthesis from two reference view images and their depth maps by stepwise duplex forward mapping. Moreover, because the geometric relationship between real and virtual views is repetitive, we implement our algorithm in the OpenGL Shading Language, which allows parallel processing on a programmable Graphics Processing Unit (GPU), to improve computation time. We demonstrate the effectiveness of our algorithm for fast view synthesis through a variety of experiments with real data.
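Forward mapping itself can be illustrated with a toy horizontal-shift warp: each reference pixel moves by a disparity proportional to the virtual-view position and to inverse depth, and a z-buffer keeps the nearer pixel when two land on the same target. This sketch (all names and numbers are ours) shows only the mapping step, not the paper's duplex GPU scheme:

```python
def forward_map(image, depth, alpha, baseline_px=8.0):
    """Warp a grey image to a virtual view at position alpha via forward mapping."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]          # holes stay 0.0 (disocclusions)
    zbuf = [[float("inf")] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            disparity = alpha * baseline_px / depth[y][x]
            xv = int(round(x + disparity))       # target column in virtual view
            if 0 <= xv < w and depth[y][x] < zbuf[y][xv]:
                zbuf[y][xv] = depth[y][x]        # nearer pixel wins the collision
                out[y][xv] = image[y][x]
    return out

img = [[float(4 * y + x) for x in range(4)] for y in range(4)]
dep = [[4.0] * 4 for _ in range(4)]              # constant depth plane
warped = forward_map(img, dep, alpha=0.5)        # uniform 1-pixel shift
assert warped[1][1] == img[1][0]
```

The zero-filled holes are exactly the disocclusion problem that using two reference views (duplex mapping) is meant to fill.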

Study on Usefulness of Entrance Surface Dose (ESD), Entropy Analysis Method to Evaluate Ionization Chamber Performance and Implementation of Optimal Chamber Combination Model when using Automatic Exposure Control (AEC) Device in Digital Radiography (DR) (디지털 방사선 시스템(DR)의 자동노출제어장치 이용 시 이온 챔버의 성능 평가를 위한 엔트로피 분석법의 유용성과 최적의 챔버 조합 모델 구현 연구)

  • Hwang, Jun-Ho;Choi, Ji-An;Lee, Kyung-Bae
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.4
    • /
    • pp.375-383
    • /
    • 2020
  • This study aimed to propose a methodology for quantitatively analyzing problems resulting from the performance and combination of ionization chambers when using automatic exposure control (AEC), and to optimize the performance of a digital radiography (DR) system. In the experiment, the X-ray beam quality of the parameters used for abdominal and pelvic examinations was evaluated by percentage average error (PAE) and half value layer (HVL). Then, the stability of the radiation output and the image quality were analyzed by calculating the entrance surface dose (ESD) and entropy for combinations of the three ionization chambers. As a result, the X-ray beam quality of the DR system used in the experiment showed a percentage average error and a half value layer in the normal range. The entrance surface dose increased in proportion to the number of chambers combined, and entropy likewise increased in proportion to the combination of ionization chambers, except when all three chambers were combined. In conclusion, analysis using entrance surface dose and entropy was found to be a useful method for evaluating the performance and combination problems of ionization chambers, and the optimal performance of digital radiography can be maintained when two or fewer ionization chambers are combined.
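The entropy used here as an image-quality indicator can be sketched as Shannon entropy over the pixel-value histogram; the grey-level lists below are invented 8-bit values, not the study's DR images:

```python
import math

def shannon_entropy(pixels: list[int]) -> float:
    """H = -sum(p * log2(p)) over the empirical pixel-value distribution."""
    counts: dict[int, int] = {}
    for v in pixels:
        counts[v] = counts.get(v, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

mixed = [0, 64, 128, 192] * 4                 # four equally likely grey levels
assert shannon_entropy(mixed) == 2.0          # log2(4) bits per pixel
assert shannon_entropy([128] * 16) == 0.0     # a flat patch carries no information
```

Higher entropy means the grey levels are spread more evenly, which is why it serves as a proxy for how much detail the detector and chamber combination preserves.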

Analysis of Problems in the Submicro Representations of Acid·Base Models in Chemistry I and II Textbooks of the 2009 & 2015 Revised Curricula (2009 개정교육과정과 2015 개정교육과정의 화학 I 및 화학 II 교과서에서 산·염기 모델의 준미시적 표상에 대한 문제점 분석)

  • Park, Chul-Yong;Won, Jeong-Ae;Kim, Sungki;Choi, Hee;Paik, Seoung-Hey
    • Journal of the Korean Chemical Society
    • /
    • v.64 no.1
    • /
    • pp.19-29
    • /
    • 2020
  • We analyzed the representations of acid-base models in 4 kinds of Chemistry I and 4 kinds of Chemistry II textbooks of the 2009 revised curriculum, and 9 kinds of Chemistry I and 6 kinds of Chemistry II textbooks of the 2015 revised curriculum. The problems found in the textbooks were divided into problems of definitions and problems in the representation of logical thinking. The representation of reversible reactions in the definition of the Brønsted-Lowry model was problematic in the Chemistry I textbooks of the 2009 revised curriculum because the concept of chemical equilibrium was lacking, and the problem persisted in the Chemistry I textbooks of the 2015 revised curriculum, which do contain the concept of chemical equilibrium. The representations of logical thinking were related to particle-kind conservation logic, combinational logic, particle-number conservation logic, and proportion logic. There were few problems in the representation of logical thinking in the Chemistry I textbooks of the 2009 revised curriculum, but more such problems appeared in the Chemistry I textbooks of the 2015 revised curriculum. Therefore, as the curriculum is revised, the representations related to acid-base models in chemistry textbooks need to be changed in a way that helps students' understanding.

Operational Definition of Components of Logical Thinking in Problem-solving Process on Informatics Subject (정보 교과의 문제해결과정에서 논리적 사고력 구성요소에 대한 조작적 정의)

  • Yoon, Il-Kyu;Kim, Jong-Hye;Lee, Won-Gyu
    • The Journal of Korean Association of Computer Education
    • /
    • v.13 no.2
    • /
    • pp.1-14
    • /
    • 2010
  • Previous research on improving logical thinking in the Informatics subject has relied on general logical-thinking tests and has shown only limited improvement of logical thinking through programming learning. In this study, the operational definition of logical thinking in the problem-solving process of Informatics education is distinguished from general logical thinking and from the logical thinking of other subjects. Firstly, we propose operational definitions of the components of logical thinking, based on an open questionnaire answered by experts and on research-team discussion. We then relate the operational definitions to the contents of the 'problem-solving methods and procedures' section of the secondary Informatics subject. Finally, we develop evaluation contents based on the operational definitions. The components of logical thinking required in the problem-solving process of the Informatics subject were ordering reasoning, propositional logic, controlling variables, combinatorial logic, and proportional reasoning. We relate the operational definitions to the problem-solving process and to the assessment of logical thinking within it. This paper offers meaningful insight toward guidelines on teaching strategies and evaluation methods for improving logical thinking in Informatics education.


A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1021-1028
    • /
    • 2005
  • A general solution for classification and regression problems can be found by matching and modifying matrices with information from the real world; these matrices are then learned in neural networks. This paper treats the primary space as the real world, and the dual space as the space to which the primary space is matched through a kernel. In practice there are two kinds of problems: complete systems, whose answer can be obtained with an inverse matrix, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Problems are often of the latter kind, so it is necessary to find a regularization parameter that turns an ill-posed or singular problem into a complete system. This paper compares the performance, on both classification and regression problems, of GCV and L-Curve, which are well-known methods for finding regularization parameters, with that of kernel methods. Both GCV and L-Curve find regularization parameters very well, and their performances are similar, although they give slightly different results under different problem conditions. However, these methods are two-step solutions: they must first calculate the regularization parameter before the problem can be handed to another solving method. Compared with GCV and L-Curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously within the learning process of the pattern weights. This paper also suggests a dynamic momentum, learned under a limited proportional condition between the learning epoch and the performance of the given problem, to increase the performance and precision of regularization. Finally, this paper shows that the suggested solution obtains better or equivalent results compared with GCV and L-Curve in experiments using the Iris data, a standard classification data set; Gaussian data, typical of singular systems; and the Shaw data, a one-dimensional image-restoration problem.
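For intuition about the two-step baseline the paper argues against, the GCV criterion can be written in closed form for a diagonal system A x = y, where the ridge solution is x_i = a_i y_i / (a_i^2 + lam) and the hat diagonal is h_i = a_i^2 / (a_i^2 + lam). The sketch below (all numbers invented) grid-searches the regularization parameter first, as a separate step:

```python
def gcv_score(lam: float, a: list[float], y: list[float]) -> float:
    """GCV(lam) = n * ||(I - H)y||^2 / tr(I - H)^2 for a diagonal system."""
    n = len(a)
    residual = sum((y[i] * lam / (a[i] ** 2 + lam)) ** 2 for i in range(n))
    trace = sum(lam / (a[i] ** 2 + lam) for i in range(n))   # tr(I - H)
    return n * residual / trace ** 2

def pick_lambda(a: list[float], y: list[float], grid: list[float]) -> float:
    """Step one of the two-step approach: choose lam, then solve separately."""
    return min(grid, key=lambda lam: gcv_score(lam, a, y))

a = [1.0, 0.5, 0.01]            # diagonal entries: one tiny singular value
y = [1.0, 0.5, 0.3]             # the 0.3 is mostly noise on that direction
grid = [1e-6, 1e-4, 1e-2, 1e-1, 1.0]
best = pick_lambda(a, y, grid)  # picks a lam that damps the singular direction
```

The one-step kernel approach the paper proposes instead folds this choice into the weight-learning iteration, avoiding the separate parameter search.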