• Title/Summary/Keyword: Maximum specific work

A Method to Calculate a Pass Rate of the ${\gamma}$-index Analysis in Tomotherapy Delivery Quality Assurance (DQA) (단층치료기를 이용한 방사선 치료의 환자별 정도관리 평가를 위한 감마인덱스의 정량화 방법)

  • Park, Dahl;Kim, Yong-Ho;Kim, Won-Taek;Kim, Dong-Won;Kim, Dong-Hyun;Jeon, Ho-Sang;Nam, Ji-Ho;Lim, Sang-Wook
    • Progress in Medical Physics
    • /
    • v.21 no.4
    • /
    • pp.340-347
    • /
    • 2010
  • DQA, a patient-specific quality assurance procedure in tomotherapy, is usually performed using an ion chamber and a film. The result of DQA is analysed with the treatment planning system called Tomo Planning Station (TomoPS). The two-dimensional dose distribution measured on film is compared with the dose distribution calculated by TomoPS using the ${\gamma}$-index analysis. In ${\gamma}$-index analysis, a criterion such as 3%/3 mm is used, and we verify whether the fraction of points that pass the criterion (the pass rate) is within tolerance. TomoPS does not provide any quantitative information regarding the pass rate. In this work, a method to obtain the pass rate of the ${\gamma}$-index analysis was proposed, and a software tool, PassRT, which calculates the pass rate was developed. The results of patient-specific QA of intensity-modulated radiation therapy measured with I'mRT MatriXX (IBA Dosimetry, Germany) and of tomotherapy DQA measured with film were used to verify the proposed method. The pass rate was calculated using PassRT and compared with the pass rate calculated by OmniPro I'mRT (IBA Dosimetry, Germany). The average difference between the two pass rates was 0.00% for the MatriXX measurements; the standard deviation and the maximum difference were 0.02% and 0.02%, respectively. For the film measurements, the average difference, standard deviation, and maximum difference were 0.00%, 0.02%, and 0.02%, respectively. For regions of interest smaller than $24.3{\times}16.6cm^2$, the proposed method can be used to calculate the pass rate of the gamma-index analysis to one decimal place and will be helpful for more accurate DQA in tomotherapy.
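The pass-rate computation described in this abstract can be sketched as follows. This is only a minimal illustration under assumed conventions (the actual PassRT algorithm is not given in the abstract): points below a low-dose threshold are excluded, and the fraction of remaining points with $\gamma \le 1$ is reported to one decimal place. The names `gamma_pass_rate` and `threshold` are hypothetical.

```python
import numpy as np

def gamma_pass_rate(gamma_map, dose, threshold=0.1):
    """Fraction of evaluated points with gamma <= 1, as a percentage.

    Points below `threshold` * max dose are excluded, a common
    convention in gamma analysis (assumed here, not taken from PassRT).
    """
    gamma_map = np.asarray(gamma_map, dtype=float)
    dose = np.asarray(dose, dtype=float)
    mask = dose >= threshold * dose.max()      # evaluable points only
    passed = np.count_nonzero(gamma_map[mask] <= 1.0)
    return round(100.0 * passed / np.count_nonzero(mask), 1)
```

For example, a 2x2 gamma map with one point above 1.0 and all points evaluable yields a pass rate of 75.0%.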

A Study on the Improvement of Flexible Working Hours (탄력적 근로시간제 개선에 대한 연구)

  • Kwon, Yong-man
    • Journal of Venture Innovation
    • /
    • v.5 no.3
    • /
    • pp.57-70
    • /
    • 2022
  • In modern industrial capitalism, the relationship between the provision of work and the receipt of wages has become an important principle governing society. Under a labor contract, the worker entrusts the right to dispose of his or her labor to the employer in direct exchange for wages, and human life should be guaranteed and reproduced with proper rest. Establishing labor relations through free contract alone poses a problem for protecting workers; accordingly, a maximum on working hours is set as a minimum right for workers, and a standard for minimum rest is prescribed. The reduction of working hours is very important for workers' quality of life, but it is also an important issue for efficient corporate activity. As of 2020, Korea's annual working hours stood at 1,908, and Korea ranked third lowest among the 37 OECD countries in the happiness index surveyed by the Sustainable Development Solutions Network (SDSN), an agency under the United Nations. Accordingly, the necessity of reducing working hours has been recognized, and the maximum working week has been limited to 52 hours since 2018. In this situation, various working-hour arrangements are legally exempted as a way to maintain companies' value-added creation and meet the diverse needs of workers, and Korea's Labor Standards Act provides for flexible working hours within three months, flexible working hours exceeding three months, selective working hours, and extended working hours. However, in the discussion on applying the flexible working hours system revised in 2021 and on the recently debated expansion of the settlement unit period, problems remain with the flexible working hours system that need to be improved. This paper therefore examines the problems of the flexible working hours system and measures for its improvement.
The flexible working hours system allows legal working hours to be exceeded on a specific day or week according to a predetermined standard without violating working-hour limits, and without additional wages for the excess overtime work. It is mainly useful for shift work in manufacturing, sales and service, continuous operations, or long-running utilities such as electricity, gas, water, and transportation. It is also used as a way to shorten working hours, for example by expanding holidays through short working days. However, if the settlement unit period is expanded, workers are disadvantaged because they lose the additional wages they would otherwise receive. Therefore, first, if the settlement unit period is to be expanded as currently discussed, additional wages should be paid for the period extended beyond the current standard. Second, the application of the system to individual workers should be improved so that the written agreement with the worker representative is accompanied by sufficient consultation with the individual workers concerned. Third, the allowable extended work during the settlement unit period should be clarified. Fourth, daily working hours should be limited or continuous rest requirements should apply. In addition, since the written agreement of the worker representative is a key issue in applying the flexible working hours system, the representativeness of the worker representative must be secured.

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology
    • /
    • v.39 no.5
    • /
    • pp.603-616
    • /
    • 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now know everything about particles and coatings and should move on to other problems. This was on the occasion of the Dragon fuel performance information meeting in London, 1973: how wrong can a genius be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s, and finally the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards from the seventies: where $9*10^{-4}$ initial free heavy metal fraction was typical for early AVR carbide fuel and $3*10^{-4}$ was acceptable for oxide fuel in THTR, today we insist on values more than an order of magnitude below this. Half a percent particle failure at end-of-irradiation, another ancient standard, is no longer acceptable, even for the most severe accidents. While legislation and licensing have not changed, one reason we insist on these improvements is the present preference for passive systems rather than the active controls of earlier times. With renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering the aspects of design, traditional and new materials, manufacturing technologies, quality control and quality assurance, irradiation and accident performance, modeling and performance predictions, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system.
Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant exit temperature, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high-quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation/accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with blocks for the prismatic fuel elements and spheres for the pebble-bed design, there is near-worldwide agreement on high-quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high-quality silicon carbide layer of $35{\mu}m$ thickness and theoretical density, and an outer pyrocarbon layer. Good performance has been demonstrated under both operational and accident conditions, i.e. to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards. It is this wide-ranging demonstration experience that makes this particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g. SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. Renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that exceed $1600^{\circ}C$ for a short period of time. This work should proceed at both national and international levels.

The Significance of Korean Proverb and Riddle in the sense of Bias (편향의 관점에서 본 한국의 속담과 수수께끼)

  • Kim, Kyung-Seop;Kim, Jeong-Lae
    • The Journal of the Convergence on Culture Technology
    • /
    • v.3 no.4
    • /
    • pp.35-42
    • /
    • 2017
  • Behavioral economics, a branch of the social sciences that seeks answers to why people sometimes make absurd economic choices, came into existence by combining economics and psychology. In contrast to traditional economics, behavioral economics has developed by explaining how people make economic choices through their own cognitive principles. Individuals lack information on the goods and services in the market, and do not know how to make the best use of the information they obtain, failing to achieve maximum utility. Human rationality is therefore confined to bounded rationality. It is heuristics that operate in this simplified decision-making process. Heuristics draw on established empirical notions and specific information, which is why cognitive biases can arise and sometimes lead to inaccurate judgment. As oral literature is basically based on the rough guesswork and perceptual biases of the general public, it is worthwhile to examine oral literature within the heuristics framework of behavioral economics. This thesis deals with the thinking types and behavioral patterns of short proverbs and folklore language-game riddles on the basis of personal or public memory. As a result, it is evident that proverbs point out biases arising from human behavior, while riddles make full and active use of biases.

Analysis of the CREOLE experiment on the reactivity temperature coefficient of the UO2 light water moderated lattices using Monte Carlo transport calculations and ENDF/B-VII.1 nuclear data library

  • El Ouahdani, S.;Erradi, L.;Boukhal, H.;Chakir, E.;El Bardouni, T.;Boulaich, Y.;Ahmed, A.
    • Nuclear Engineering and Technology
    • /
    • v.52 no.6
    • /
    • pp.1120-1130
    • /
    • 2020
  • The CREOLE experiments performed in the EOLE critical facility, located at the CEA Cadarache Nuclear Center, have provided interesting and complete experimental information on temperature effects in light water reactor lattices. To analyze these experiments accurately, an elaborate calculation scheme using the Monte Carlo method implemented in the MCNP6.1 code and the ENDF/B-VII.1 cross-section library has been developed. We used the ENDF/B-VII.1 data provided with the MCNP6.1.1 version in ACE format, and the Makxsf utility to prepare data at specific temperatures not available in the original MCNP6.1.1 library. The main purpose of this analysis is the qualification of the ENDF/B-VII.1 nuclear data for the prediction of the reactivity temperature coefficient, while also confirming the ability of the MCNP6.1 system to model an experiment as complex as CREOLE. We analyzed the case of a UO2 lattice with 1166 ppm of boron in an ordinary water moderator at specified temperatures. A detailed comparison of the calculated effective multiplication factors with the reference values [1] at room temperature, presented in this work, shows good agreement, validating our 3D calculation model. The discrepancies between calculations and the differential measurements of the reactivity temperature coefficient for the analyzed configuration are relatively small: the maximum discrepancy does not exceed 1.1 pcm/°C. In addition to analyzing the direct differential measurements of the reactivity temperature coefficient performed in the poisoned UO2 lattice configuration, we also analyzed integral measurements in the clean UO2 lattice configuration, using the equivalence of the integral temperature reactivity worth with the driver core fuel reactivity worth and the soluble boron reactivity worth. In this case, both the ENDF/B-VII.1 and JENDL-4 libraries were used in our analysis, and the results obtained are very similar.
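The differential reactivity temperature coefficient quoted above (in pcm/°C) follows from the standard definition of reactivity, $\rho = (k_{eff}-1)/k_{eff}$, evaluated at neighbouring temperatures. A minimal sketch (the $k_{eff}$ values in the example are made-up illustrations, not CREOLE results):

```python
def reactivity_pcm(keff):
    """Reactivity in pcm from an effective multiplication factor."""
    return 1e5 * (keff - 1.0) / keff

def temperature_coefficient(k1, t1, k2, t2):
    """Differential reactivity temperature coefficient in pcm/degC,
    approximated by the finite difference of reactivities."""
    return (reactivity_pcm(k2) - reactivity_pcm(k1)) / (t2 - t1)
```

For instance, a lattice going from $k_{eff}=1.001$ at 20 °C to $k_{eff}=1.000$ at 80 °C has a coefficient of about -1.67 pcm/°C.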

Noninvasive Method to Distinguish between Glucose and Sodium Chloride Solution Using Complementary Split-Ring Resonator (Complementary Split Ring Resonator(CSRR)를 이용한 포도당과 염화나트륨 수용액의 비침습적 구별)

  • Jang, Chorom;Park, Jin-Kwan;Yun, Gi-Ho;Yook, Jong-Gwan
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.29 no.4
    • /
    • pp.247-255
    • /
    • 2018
  • In this work, glucose solution and sodium chloride solution were distinguished noninvasively using a microwave complementary split-ring resonator (CSRR). Based on the electrical properties of the two solutions measured using an open-ended coaxial probe, a CSRR was designed and fabricated to operate at a specific frequency that facilitates differentiating the two solutions. Furthermore, a polydimethylsiloxane mold was fabricated to concentrate the solution in the region where the electric field of the resonator is strongest, and a laminating film was used to prevent contact between the solution and the resonator. Experiments were performed by dropping $50{\mu}L$ of solution in concentration steps of 100 mg/dL, up to the maximum human blood glucose level of 400 mg/dL. Our experiments confirmed that the transmission coefficients ($S_{21}$) of the glucose and sodium chloride solutions vary by -0.06 dB and 0.14 dB, respectively, per 100 mg/dL concentration change at the resonance frequency. Thus, the opposite trends in the variation of $S_{21}$ with concentration can be used to distinguish between the two solutions.
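Because the two solutions shift $S_{21}$ in opposite directions (about -0.06 dB for glucose and +0.14 dB for NaCl per 100 mg/dL, per the abstract), a measured concentration series can be labelled by the sign of a fitted slope. A minimal sketch with hypothetical data; the function name and the example $S_{21}$ values are illustrative, not measurements from the paper:

```python
import numpy as np

def classify_solution(concentrations, s21_db):
    """Label a solution from the slope of S21 (dB) vs concentration:
    a negative slope indicates glucose, a positive slope NaCl."""
    slope = np.polyfit(concentrations, s21_db, 1)[0]  # linear fit, slope term
    return "glucose" if slope < 0 else "sodium chloride"
```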

Machine-Learning Based Biomedical Term Recognition (기계학습에 기반한 생의학분야 전문용어의 자동인식)

  • Oh Jong-Hoon;Choi Key-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.8
    • /
    • pp.718-729
    • /
    • 2006
  • There has been increasing interest in automatic term recognition (ATR), which recognizes technical terms in domain-specific texts. ATR is composed of 'term extraction', which extracts candidate technical terms, and 'term selection', which decides whether the terms in the list derived from 'term extraction' are technical terms or not. 'Term selection' ranks the term list according to features of technical terms and finds the boundary between technical and general terms. Previous work used only statistical features of terms for 'term selection'. However, statistical features alone are of limited effectiveness for selecting technical terms from a term list. The objective of this paper is to find effective features for 'term selection' by considering various aspects of technical terms. To solve the ranking problem, we derive various features of technical terms and combine them using machine-learning algorithms. To solve the boundary-finding problem, we define it as a binary classification problem that classifies each term in the list as either a technical term or a general term. Experiments show that our method records 78-86% precision and 87-90% recall in boundary finding, and 89-92% 11-point precision in ranking. Moreover, our method outperforms the previous work by up to about 26%.
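The two sub-problems described (ranking a candidate list, then finding the technical/general boundary) can be sketched as a linear combination of term features followed by a threshold. The feature values, weights, and threshold below are hypothetical stand-ins, not those learned in the paper:

```python
def score(term_features, weights):
    """Combine per-term feature values (e.g. frequency, domain
    specificity) into one ranking score via a weight vector."""
    return sum(w * f for w, f in zip(weights, term_features))

def select_terms(candidates, weights, threshold):
    """Rank candidate terms by score, then binary-classify:
    scores above the threshold count as technical terms."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: score(kv[1], weights), reverse=True)
    return [term for term, feats in ranked if score(feats, weights) > threshold]
```

In a real system the weights would come from a trained model rather than being set by hand.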

Cure Behaviors and Fracture Toughness of PEI/Difunctional Epoxy Blends (PEI/DGEBA 블랜드계의 열적특성 및 파괴인성)

  • Park, Soo-Jin;Jin, Sung-Yeol;Kaang, Shinyoung
    • Journal of Adhesion and Interface
    • /
    • v.4 no.3
    • /
    • pp.33-40
    • /
    • 2003
  • In this work, diglycidyl ether of bisphenol A (DGEBA)/polyetherimide (PEI) blends were cured using 4,4'-diaminodiphenyl methane (DDM), and the effects of adding different PEI contents to neat DGEBA on the thermal properties and fracture toughness of the blends were investigated. The PEI content was varied at 0, 2.5, 5, 7.5, and 10 phr. The cure activation energies ($E_a$) of the cured specimens were determined by the Kissinger equation, and the mechanical interfacial properties of the specimens were evaluated by the critical stress intensity factor ($K_{IC}$). Their surfaces were also examined using a scanning electron microscope (SEM), and the surface energetics of the blends were determined from contact angles. As a result, $E_a$ and $K_{IC}$ showed maximum values at 7.5 phr PEI. This result was interpreted as an increase in the network structure of the DGEBA/PEI blends. The surface energetics of the DGEBA/PEI blends showed a behavior similar to the $K_{IC}$ results, probably due to the improvement of the specific (polar) component of the surface free energy of the blends, which increases the hydrogen bonding of their hydroxyl and imide groups.
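The Kissinger equation used above extracts the cure activation energy $E_a$ from the exotherm peak temperatures $T_p$ measured at several heating rates $\beta$; in its standard form,

```latex
\ln\!\left(\frac{\beta}{T_p^{2}}\right)
  = \ln\!\left(\frac{AR}{E_a}\right) - \frac{E_a}{R\,T_p}
```

so plotting $\ln(\beta/T_p^2)$ against $1/T_p$ yields a straight line of slope $-E_a/R$, where $A$ is the pre-exponential factor and $R$ the gas constant.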

A Study on Surface Properties of Mechanical Interfacial Behavior of DGEBA/PMR-15 Blends (DGEBA/PMR-15 블렌드계의 표면특성 변화가 기계적 계면특성에 미지는 영향)

  • Park, Soo-Jin;Lee, Hwa-Young;Han, Mijeong;Hong, Sung-Kwon
    • Journal of Adhesion and Interface
    • /
    • v.4 no.1
    • /
    • pp.1-8
    • /
    • 2003
  • In this work, the effect of PMR-15 content on the variation of the surface free energy of the DGEBA/PMR-15 blend system was investigated in terms of contact angles and mechanical interfacial tests. In the FT-IR results for the blend system, C=O (1,772, $1,778cm^{-1}$) and C-N ($1,372cm^{-1}$) peaks appeared with the imidization of PMR-15, and a broad -OH ($3,500cm^{-1}$) peak appeared at 10 phr of PMR-15 due to ring-opening of the epoxy. Contact angle measurements were performed using deionized water and diiodomethane as testing liquids. As a result, the surface free energy of the blends reached a maximum value at 10 phr of PMR-15, due to a significant increase of the specific (polar) component. The mechanical interfacial properties, measured by the critical stress intensity factor ($K_{IC}$) and the critical strain energy release rate ($G_{IC}$), showed a behavior similar to the surface energetics results. This behavior was probably attributable to improved interfacial adhesion between molecules, resulting from increased hydrogen bonding in the blends.
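Resolving a solid's surface free energy into dispersive and specific (polar) components from water and diiodomethane contact angles, as done above, is commonly carried out with the Owens-Wendt geometric-mean method; whether the paper used exactly this scheme is not stated in the abstract, so the sketch below, including the quoted liquid reference values, is an assumption:

```python
import math

# Commonly quoted liquid surface tension components in mJ/m^2
# (total, dispersive, specific); reference values are assumed,
# not taken from the paper.
WATER = (72.8, 21.8, 51.0)
DIIODOMETHANE = (50.8, 48.5, 2.3)

def owens_wendt(theta_water_deg, theta_dim_deg):
    """Solve the two Owens-Wendt equations
    gamma_L(1+cos theta) = 2*sqrt(gd_s*gd_L) + 2*sqrt(gp_s*gp_L)
    for the solid's dispersive and specific components."""
    rows, rhs = [], []
    for (g, gd, gp), theta in ((WATER, theta_water_deg),
                               (DIIODOMETHANE, theta_dim_deg)):
        rows.append((2 * math.sqrt(gd), 2 * math.sqrt(gp)))
        rhs.append(g * (1 + math.cos(math.radians(theta))))
    # 2x2 linear solve for x = sqrt(gd_solid), y = sqrt(gp_solid)
    (a, b), (c, d) = rows
    det = a * d - b * c
    x = (rhs[0] * d - b * rhs[1]) / det
    y = (a * rhs[1] - rhs[0] * c) / det
    return x * x, y * y   # dispersive, specific components
```

Qualitatively, a smaller water contact angle (better wetting by the polar liquid) yields a larger specific component, matching the trend the abstract reports at 10 phr PMR-15.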

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.121-145
    • /
    • 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem: the amount of computation required to find solutions increases exponentially with the problem size. For many years there have been many studies on university timetabling, driven by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, while the course timetable for major subjects is scheduled by each department of a university. We found several problems in our analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated in a few time slots of the timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are taught by the same lecturers every year, and most lecturers prefer the same time slots for their subjects as in the previous year, which means it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system with two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning.
In the second phase, the department schedules a timetable with the help of an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides illustrations from Hanshin University. We classified the timetabling criteria into intrinsic and extrinsic criteria. Among the intrinsic criteria, there are three related to lecturer, class, and classroom, all of which are hard constraints. Among the extrinsic criteria, there are four: 'the number of lesson hours' per lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' For 'the number of lesson hours' per lecturer, there are three sub-criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one for measuring the similarity between the subjects of the current semester and those of previous timetables, and one for evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution-degree index, based on information entropy, indicates how subjects are distributed across the timetable. To show the viability of this study, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity over the most similar cases of all departments was estimated at 41.72%, which means that a timetable template generated from the most similar case will be helpful. Sensitivity analysis showed that the distribution degree increases if we set 'the number of subjects in the same day-hour' to more than 90%.
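The entropy-based distribution index described in this abstract can be sketched as a normalized Shannon entropy of the per-time-slot subject counts: 1.0 for a perfectly even spread of classes, 0.0 when everything is packed into one slot. The normalization choice is an assumption, since the paper's exact formula is not reproduced in the abstract:

```python
import math

def distribution_degree(slot_counts):
    """Normalized Shannon entropy of subject counts per time slot:
    1.0 = perfectly even spread, 0.0 = everything in one slot."""
    total = sum(slot_counts)
    probs = [c / total for c in slot_counts if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(slot_counts))
    return entropy / max_entropy if max_entropy > 0 else 1.0
```

For example, four subjects spread evenly over four slots score 1.0, while four subjects crammed into a single slot score 0.0.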