The Analysis of a Cerrobend Compensator and an Electronic Compensator Designed by a Radiation Treatment Planning System
Progress in Medical Physics, v.16 no.2, pp.82-88, 2005
In this study, a physical compensator made of the high-density material Cerrobend and an electronic compensator realized by the movement of a dynamic multileaf collimator were analyzed in order to verify the adequacy of the compensator design function in the commercial RTP (radiation treatment planning) system, Eclipse. CT images of a phantom composed of regions of five different thicknesses were acquired, and a compensator that produces a homogeneous dose distribution at the reference depth was designed in the RTP system. The frame for casting the Cerrobend compensator was made with a computerized automatic styrofoam cutting device, and the Millennium MLC-120 was used for the electronic compensator. All dose values and isodose distributions were measured with radiographic EDR2 film. The deviation of the dose distribution was
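The abstract does not state how the per-region compensator thicknesses are derived. As a rough illustration only, a minimal sketch of one common approach follows, assuming simple exponential attenuation with a hypothetical effective attenuation coefficient for Cerrobend; all names and numerical values are illustrative, not taken from the paper:

```python
import math

# Hypothetical effective linear attenuation coefficient of Cerrobend at the
# treatment energy (1/cm); an assumed value, not from the paper.
MU_EFF = 0.52

def compensator_thickness(open_dose: float, target_dose: float) -> float:
    """Cerrobend thickness needed so the attenuated dose matches the target.

    Assumes simple exponential attenuation: target = open * exp(-mu * t).
    """
    if target_dose >= open_dose:
        return 0.0  # this region needs no attenuation
    return math.log(open_dose / target_dose) / MU_EFF

# Open-field doses at the reference depth under five phantom regions of
# different thickness (hypothetical values, arbitrary units).
open_doses = [1.30, 1.18, 1.00, 0.92, 0.85]
target = min(open_doses)  # homogenize to the lowest regional dose

for i, d in enumerate(open_doses, start=1):
    t = compensator_thickness(d, target)
    print(f"region {i}: open dose {d:.2f} -> Cerrobend thickness {t:.2f} cm")
```

The thickest Cerrobend ends up over the thinnest phantom region, which is the qualitative behavior any tissue-deficit compensator must show.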
During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report.

1. Unit hydrograph at Naju
There are many ways to derive a unit hydrograph, but here I want to explain how to derive one from an actual runoff curve at Naju. A discharge curve derived from a single rain storm depends on the rainfall intensity per hour. After finding the hydrograph at two-hour intervals, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated discharge and the measured discharge below 10%. The discharge period of a unit hydrograph depends on the length of the catchment area.

2. Determination of sluice dimensions
According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid damage to crops and structures. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide will be adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best case for the result of the calculation.

3. Tidal computation for determination of the closure curve
During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the
velocities in the closing gap during flood and ebb for the first-mentioned method of construction, until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, the velocity can be calculated from the difference between the inner water level and the tidal level (outer water level) with the formula
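The formula itself is cut off in the source text. For flow through a closure gap driven by the head difference between the inner (reservoir) and outer (tidal) water levels, the standard relation is the Torricelli-type discharge formula, reconstructed here as an assumption about what the author intended, with μ a discharge coefficient:

```latex
% Assumed reconstruction of the truncated formula: velocity through the
% closure gap from the head difference between inner and outer water levels.
v = \mu \sqrt{2 g \left( h_{\mathrm{inner}} - h_{\mathrm{outer}} \right)}
```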
Paper packs, glass bottles, metal cans, and plastic materials are classified into packaging-material recycling groups subject to Extended Producer Responsibility (EPR). In the case of waste paper packs, the compressed cartons are dissociated to separate the polyethylene films and other foreign substances, and the pulp is then washed, pulverized, and dried to produce toilet paper. Glass bottles for recycling are supplied to bottle manufacturers after the waste bottles are collected, cleared of foreign substances, sorted by color, crushed, and processed into raw material. Korea's waste glass recycling technology is largely manual, except for the removal of metal components and low-specific-gravity materials. Metal cans are classified into iron and aluminum cans by an automatic sorting machine, compressed, and reprocessed into iron and aluminum in a blast furnace. In the case of composite plastic materials, the sorted and compressed product is crushed and then recycled through melt molding, while refined products are produced through solid-fuel manufacturing by compression molding and through emulsification by pyrolysis. In the recycling processes for paper packs, glass bottles, metal cans, and plastic materials, the influx of foreign substances mixed in with the recyclables interferes with the process and increases recycling cost and time. Therefore, the government needs to improve the legal system so that materials and structures that are easy to recycle are used from the design stage of products and packaging materials.
We study the internal structure under the artificial mountain of Heumgyeonggak-nu, a Korean water-powered clock of the early Joseon dynasty. All the puppets on the artificial mountain are driven at their designated times by the rotational force generated by the water wheel. We design a model that works with the three parts of the artificial mountain. At the upper part of the mountain, facing east, west, north, and south, there are four puppets called the Four Mystical Animal Divinities and four ladies called the Jade Ladies. The former rotate a quarter turn every double hour, and the latter ring a bell every hour. In the middle part of the mountain is the timekeeping platform with four puppets: the Timekeeping Official (Hour Jack) and the Bell, Drum, and Gong Warriors. The Hour Jack marks the time with the three warriors, each striking his own bell, drum, or gong. On the plain there are 12 Jade Lady puppets (the lower ladies) paired with 12 Oriental Animal Deity puppets. At its designated time, a lady doll pops out of her hole and her animal doll rises; two hours later, the animal deity lies down and the lady hides in the artificial plain. These puppets are moved at regular times by signaling devices such as iron balls, bumps, levers, and so on. We can use the balls and bumps to explain the concept of the Jujeon system. Iron balls were used to operate the puppets of the timekeeping mechanism in Borugak-nu, another Korean water-powered clock of the Joseon dynasty, developed earlier than Heumgyeonggak-nu. According to a previous North Korean study (Choi, 1974), bumps were clearly used in the internal structure of Heumgyeonggak-nu. The armillary clock made by Song I-young in 1669 also utilized bumps. Finally, we present mock-ups of the three timekeeping systems.
The earliest safety data programs, the FDR and CVR, were electronic reporting systems that generate data "automatically." The FDR program, originally instituted in 1958, had no publicly available protections against sanctions by the FAA or an airline, although there are agreements and union contracts forbidding the use of FDR data for FAA enforcement actions. The FDR program still has the least formalized protections. With the advent of the CVR program in 1966, the precursor to the current FAR 91.25 was already in place, having been promulgated in 1964. It stated that the FAA would not use CVR data for enforcement actions. In 1982, Congress began restricting the disclosure of CVR tapes and transcripts. Congress added further clarification of their availability in discovery in civil litigation in 1994. Thus, CVR data have more definitive protections in place than FDR data do. The ASRS was the first non-automatic reporting system, and built into its original design in 1975 was a promise of limited protection from enforcement sanctions. That promise was further codified in an FAR in 1979. As with the CVR, from its inception the ASRS had some protections built in for the person who might have had a safety problem. However, the program did not (and to this day does not) explicitly deal with issues of use by airlines, litigants, or the public media, although it appears that airlines will either take a non-punitive stance if an ASRS report is filed or ignore the fact that it has been filed at all. The FAA worked with several U.S. airlines in the early 1990s on developing ASAP programs, and it issued an Advisory Circular about the program in 1997. From its inception, the ASAP program contained some FAA enforcement protections and company discipline protections, although protection against litigation disclosure and public disclosure was not added until 2003, when FAA Order 8000.82 was promulgated, placing the program under the protections of FAR 193, which had been added in 2001. The FOQA program, when it was first instituted through a demonstration program in 1995, did not contain protections against sanctions. Now, however, the FAA cannot take enforcement action based on FOQA safety data, and an airline is limited to "corrective action" under the program. Union contracts can exclude FOQA from the realm of disciplinary action, although in the absence of such a contract airlines may require retraining. The data are protected against disclosure for litigation and public media purposes by FAA Order 8000.81, issued in 2003, which placed FOQA under the protections of FAR 193. The figure on the next page shows when each program began and when each statute, regulation, or order became effective for that program.
Today, photography faces a crisis of identity and an ontological dilemma arising from the digital imaging processes of new technological forms. To say that the traditional photographic medium, which has changed the way we view the world and ourselves, needs rethinking is perhaps an understatement: photography has transformed our essential understanding of reality. Photographic images are no longer regarded as true automatic recordings, innocent evidence, or a mirror of reality. Rather, photography constructs the world for our entertainment, helping to create the comforting illusions by which we live. The recognition that photographs are constructions rather than reflections of reality is the basis of actual practice within the contemporary photographic world. It is a shock. This thesis aims to examine the problems of photographic identity and the ontological crisis raised by the control and regulation of digital photographic imagery, which enables reproduction in the era of electronic simulation. Photography loses its special aesthetic status, no longer serving as true information or as evidence tied exclusively to traditional film and paper, which offered both technological accuracy and a medium-specific aesthetic. As a result, photography faces two crises: one of photographic ontology (the introduction of computerized digital images) and one of photographic epistemology (having to do with broader changes in ethics, knowledge, and culture). Taken together, these crises apparently threaten us with the death of photography, with the 'end' of photography and the culture it sustains. The thesis examines the dilemma of photography's ontology and epistemology, especially the automatic index and digital codes, from the standpoint of the medium's origin, meaning, and identity as a technological medium. In particular, it focuses on the presence of analog imagery, rooted in the nature of the material world, and the presence of digital imagery, rooted in the cultural situations of our society. It also examines how the main issues in the history of photography have centered on ontological arguments since the discovery of photography in 1839. Photography has never been a single, static technological form. Rather, its nearly two centuries of technological development have been marked by numerous competing technological innovations and self-revolutions along these dual aspects. This thesis examines recent accounts of photography by analyzing the medium's concept, meaning, and identity between film-based and digital-based images from the perspectives of photographic ontology and epistemology. The structure of the thesis is fairly straightforward: it examines what appear to be two opposing views of photographic conditions and ontological situations. It contrasts views that locate the value of photography in its fundamental characteristics as a medium, and it seeks a possible solution to the dilemma of photographic ontology by tracing the medium's origins from the early years of the nineteenth century to the questions now being raised about the different meanings (analog/digital) of photography. Finally, this thesis concludes and emphasizes that the photographic ontological crisis reflects a paradoxical dynamic structure, one left unresolved since the origins of the medium itself.
Moreover, photography does not have a single ontological identity, nor can it be understood as having a static identity or singular status within the dynamic field of technologies, practices, and images.
The failure of early economic sanctions aimed at hurting the overall economies of targeted states called for a more sophisticated design of economic sanctions. This paved the way for the advent of 'smart sanctions,' which target the supporters of the regime instead of the general public. Despite controversies over the effectiveness of economic sanctions as a coercive tool for changing the behavior of a targeted state, the transformation from 'comprehensive sanctions' to 'smart sanctions' is gaining the status of a legitimate method of punishing states that do not conform to international norms, in this paper's context the nonproliferation of weapons of mass destruction. The five permanent members of the United Nations Security Council have proved that they can come to an accord on imposing economic sanctions rather than adopting resolutions to wage war against targeted states. The North Korean nuclear issue has been the biggest security threat to countries in the region, even for China, out of fear that further development of nuclear weapons in North Korea might lead to a 'domino effect' of nuclear proliferation in Northeast Asia. The UNSC adopted economic sanctions as early as 2006, after the first North Korean nuclear test, and has continually strengthened sanctions measures at each stage of North Korea's weapons development. While the effectiveness of the early sanctions on North Korea was dubious, recent sanctions limiting North Korea's exports of coal and imports of oil seem to have had an impact on the regime, inducing Kim Jong-un to commit to peaceful talks since 2018. The purpose of this paper is to add a variable to the factors determining the success of economic sanctions on North Korea: preventing North Korea's evasion efforts, conducted through illegal transshipments at sea. I first analyze the causes of the recent success of the economic sanctions that led Kim Jong-un to engage in talks and then add the maritime element to the argument. There are three conditions for the success of the sanctions regime: (1) smart sanctions targeting commodities and support groups (elites) vital to regime survival; (2) China's faithful participation in the sanctions regime; and (3) preventing North Korea's maritime evasion efforts.
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as on those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF 15-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed to process massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of existing computing infrastructures. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage needs or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic recovery functions that let the system continue operating after a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data, and such strict schemas cannot expand across nodes when rapidly growing data must be distributed to multiple nodes. NoSQL databases do not provide the complex computations that relational databases do, but they can easily expand through node dispersion when the amount of data increases rapidly; they are non-relational databases with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
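As a concrete illustration of why the document-oriented model suits these logs, a minimal sketch using pymongo is given below; the database name, collection name, field names, and shard key are hypothetical, not taken from the paper:

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # hypothetical cluster address
db = client["bank_logs"]                           # hypothetical database name
logs = db["client_business"]                       # hypothetical collection name

# Documents need no fixed schema: each log type can carry its own fields.
logs.insert_one({"type": "transfer", "ts": "2024-01-01T09:00:00",
                 "branch": "A01", "amount": 150000})
logs.insert_one({"type": "login", "ts": "2024-01-01T09:00:02",
                 "client_id": "c-123", "channel": "mobile"})

# Index the fields the analysis queries filter on.
logs.create_index([("type", ASCENDING), ("ts", ASCENDING)])

# Auto-Sharding: distribute the collection by a hashed key so inserts
# spread across shards as data volume grows (requires a sharded deployment).
admin = client.admin
admin.command("enableSharding", "bank_logs")
admin.command("shardCollection", "bank_logs.client_business",
              key={"ts": "hashed"})
```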
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module, as sketched below. The log graph generator module renders the results of the log analyses of the MongoDB module, the Hadoop-based analysis module, and the MySQL module by analysis time and by type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log-insertion performance evaluation over various chunk sizes.
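A minimal sketch of the collector's routing decision described above, assuming a simple two-way split between the real-time (MySQL) path and the aggregated (MongoDB) path; the store interface and the set of real-time log types are hypothetical, not specified in the paper:

```python
class MemoryStore:
    """Stand-in for the MySQL / MongoDB modules (hypothetical interface)."""
    def __init__(self, name):
        self.name, self.rows = name, []
    def insert(self, record):
        self.rows.append(record)

REALTIME_TYPES = {"error", "security_alert"}  # assumed real-time log types

def route(log, mysql_store, mongo_store):
    """Classify a log record by type and dispatch it to the right store."""
    if log.get("type") in REALTIME_TYPES:
        mysql_store.insert(log)   # served immediately by the graph generator
    else:
        mongo_store.insert(log)   # aggregated, later analyzed via Hadoop

mysql, mongo = MemoryStore("mysql"), MemoryStore("mongo")
for log in [{"type": "error", "msg": "timeout"},
            {"type": "transfer", "amount": 90000}]:
    route(log, mysql, mongo)
print(len(mysql.rows), len(mongo.rows))  # -> 1 1
```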