• Title/Summary/Keyword: 한국컴퓨터


A Study on Greenspace Planning Strategies for Thermal Comfort and Energy Savings (열쾌적성과 에너지절약을 위한 녹지계획 전략 연구)

  • Jo, Hyun-Kil;Ahn, Tae-Won
    • Journal of the Korean Institute of Landscape Architecture / v.38 no.3 / pp.23-32 / 2010
  • The purpose of this study was to quantify human energy budgets for different outdoor surface structures affecting thermal comfort, to analyze the impact of tree shading on building energy savings, and to suggest desirable strategies for urban greenspace planning. Concrete paving and grass spaces without tree shading, and compacted-sand spaces with tree shading, were selected to reflect archetypal compositions of outdoor surface materials, and human energy budgets during static activity were estimated for the three space types. The major determinants of the energy budgets were the presence of shading and the albedo and temperature of the base surfaces. The energy budgets for concrete paving and grass spaces without tree shading were $284\;W/m^2$ and $226\;W/m^2$, respectively, and both space types were considerably poor in thermal comfort. It is therefore desirable to construct outdoor resting spaces with evapotranspiring shade trees and natural materials for the base plane. Building energy savings from tree shading for Daegu in the southern region were quantified using computer modeling programs and compared with a previous study of Chuncheon in the middle region. Shade trees planted to the west of a building were most effective for annual savings of heating and cooling energy. Planting shade trees to the south should be avoided, because they increased heating energy use while yielding only small cooling energy savings in both climate regions. A large shade tree to the west or east saved 1~2% of cooling energy across building types and regions. Based on these results and previous studies, strategies and indicators for urban greenspace planning were suggested to improve the thermal comfort of outdoor spaces and to save energy in indoor spaces: thermal comfort of construction materials for outdoor spaces, building energy savings through shading, evapotranspiration and windspeed mitigation by greenspaces, and greenspace area and volume for air-temperature reduction. In addition, this study explored applying these strategies to greenspace-related regulations to ensure their effectiveness.
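The abstract's core comparison, that shading and surface properties dominate the human energy budget, can be illustrated with a toy radiation-load calculation. All coefficients and view factors below are illustrative assumptions, not the values used in the study:

```python
# Toy sketch of a human energy-budget comparison for shaded vs. unshaded
# surfaces. Coefficients, view factors, and temperatures are assumptions
# chosen for illustration only.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def absorbed_radiation(direct_solar, surface_albedo, surface_temp_k, shade_fraction):
    """Approximate shortwave + longwave load on a standing person (W/m^2)."""
    shortwave = direct_solar * (1.0 - shade_fraction)   # direct beam blocked by canopy
    reflected = direct_solar * surface_albedo * 0.5     # upward reflection seen by the body
    longwave = SIGMA * surface_temp_k ** 4 * 0.5        # emission from the ground plane
    return shortwave + reflected + longwave

# Unshaded concrete (hot, moderately reflective) vs. heavily shaded sand (cooler):
concrete = absorbed_radiation(800, 0.30, 320, 0.0)
shaded_sand = absorbed_radiation(800, 0.25, 305, 0.9)
print(round(concrete), round(shaded_sand))  # the shaded case receives far less
```

Even this crude model reproduces the qualitative ordering reported in the study: the unshaded paved surface imposes a much larger radiant load than the shaded natural surface.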

Scheduling Algorithms and Queueing Response Time Analysis of the UNIX Operating System (UNIX 운영체제에서의 스케줄링 법칙과 큐잉응답 시간 분석)

  • Im, Jong-Seol
    • The Transactions of the Korea Information Processing Society / v.1 no.3 / pp.367-379 / 1994
  • This paper describes the scheduling algorithms of the UNIX operating system and shows an analytical approach to approximating the average conditional response time for a process, defined as the average time between the submittal of a process requiring a certain amount of CPU time and the completion of that process. The process scheduling algorithms in the UNIX system are based on priority service disciplines. That is, the behavior of a process is governed by scheduling rules whereby (i) time-shared computer usage is obtained by allotting each request a quantum until it completes its required CPU time, (ii) nonpreemptive switching in system mode and preemptive switching in user mode determine the quantum, (iii) the first-come-first-served discipline is applied within the same priority level, and (iv) after completing an allotted quantum, the process is placed at the end of either the runnable queue corresponding to its priority or the disk queue where it sleeps. These scheduling rules create a round-robin effect in user mode. Using the round-robin effect and the preemptive switching, we approximate process delay in user mode; using the nonpreemptive switching, we approximate process delay in system mode; and we also account for process delay due to disk input and output operations. The average conditional response time is then obtained by approximating the total process delay. The results show an excellent response time for processes requiring system time, at the expense of processes requiring user time.
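The round-robin effect described in the abstract can be sketched with a minimal discrete-event simulation. This is an illustrative model of quantum-based time sharing, not the paper's analytic queueing approximation:

```python
import collections

# Minimal round-robin sketch: each process receives a fixed quantum in turn
# until its required CPU time is exhausted. Response time is measured from
# submittal (t=0 for all, an assumption) to completion, matching the
# abstract's definition of conditional response time.
def round_robin_response_times(service_times, quantum):
    """service_times: required CPU time per process, all submitted at t=0."""
    queue = collections.deque(enumerate(service_times))
    remaining = list(service_times)
    clock = 0.0
    finish = {}
    while queue:
        pid, _ = queue.popleft()
        slice_ = min(quantum, remaining[pid])
        clock += slice_
        remaining[pid] -= slice_
        if remaining[pid] > 1e-12:
            queue.append((pid, None))   # requeue at the tail, as in UNIX
        else:
            finish[pid] = clock
    return [finish[p] for p in range(len(service_times))]

times = round_robin_response_times([1.0, 3.0, 5.0], quantum=1.0)
print(times)  # short jobs finish early; long jobs pay the sharing cost
```

Note how the shortest job completes after a single quantum while longer jobs interleave, which is exactly the conditional-response-time behavior the paper analyzes.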


Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics / v.17 no.1 / pp.24-31 / 2006
  • Automated analysis software was developed to measure the magnitude of intrafractional and interfractional errors during breast radiation treatments. Error-analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256×256 pixels). Twelve patients were imaged for a minimum of 7 treatment days; on each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations ($\sigma$) of intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin needed to include the clinical target volume (CTV) with a 95% confidence level was calculated as $2\;(1.96\;{\sigma})$. The PTV margin required to compensate for intrafractional error (mainly breathing motion) ranged from 2 mm to 4 mm, whereas the margin compensating for interfractional error ranged from 7 mm to 31 mm; the total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce large PTVs for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software, as described in this report, to process and take advantage of the large number of EPID images available for error analysis, which will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient-monitoring information with minimal effort.
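The margin formula quoted in the abstract, $2\;(1.96\;\sigma)$ for 95% coverage, is simple enough to sketch directly. The sample setup errors below are hypothetical values for illustration, not patient data from the study:

```python
import math

# PTV margin sketch following the abstract's formula: margin = 2 * 1.96 * sigma,
# i.e., two-sided 95% coverage of a normally distributed positioning error.
def ptv_margin(sigma_mm):
    return 2 * 1.96 * sigma_mm

def stdev(errors_mm):
    """Sample standard deviation of measured displacements."""
    mean = sum(errors_mm) / len(errors_mm)
    return math.sqrt(sum((e - mean) ** 2 for e in errors_mm) / (len(errors_mm) - 1))

# Hypothetical interfractional setup errors (mm) for one patient:
setup = [2.0, -3.5, 5.0, -1.0, 4.5, -2.5, 3.0]
print(round(ptv_margin(stdev(setup)), 1))  # margin in mm for this example
```

A small intrafractional sigma of about 1 mm yields the 2-4 mm margins reported, while the much larger setup sigmas drive the 7-31 mm interfractional margins.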


Benchmark Results of a Monte Carlo Treatment Planning System (몬테카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics / v.13 no.3 / pp.149-155 / 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing have made the clinical use of Monte Carlo-based dose calculation possible. To compare the speed and accuracy of dose calculations between different codes, a set of benchmark tests was proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising 28 Intel Pentium CPUs was implemented for routine clinical use, and the purpose of this study was to evaluate its performance using those benchmark tests. The benchmark comprises three parts: (a) the speed of photon beam dose calculation inside a given phantom 30.5 cm × 39.5 cm × 30 cm deep, filled with 5 mm³ voxels, to within 2% statistical uncertainty; (b) the speed of electron beam dose calculation inside the same phantom; and (c) the accuracy of photon and electron beam calculations inside a heterogeneous slab phantom compared with reference EGS4/PRESTA results. In the speed benchmark, reaching less than 2% statistical uncertainty for an 18 MV photon beam took 5.5 minutes. Although the net calculation for electron beams was an order of magnitude faster than for photon beams, the overall calculation time was similar owing to the overhead of maintaining parallel processing. Since our Monte Carlo code is EGSnrc, an improved version of EGS4, the accuracy tests showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system produces clinically meaningful results. Although more efficient codes such as MCDOSE and VMC++ have been developed, BEAMnrc, based on the EGSnrc code system, may be used for routine clinical Monte Carlo treatment planning in conjunction with clustering techniques.
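The "within 2% statistical uncertainty" stopping criterion rests on the Monte Carlo scaling law that relative uncertainty falls as $N^{-1/2}$ in the number of histories. A hedged sketch of how a pilot sample predicts the required run length (the Gaussian pilot data are a stand-in, not dose scoring):

```python
import math
import random

# Sketch of the 1/sqrt(N) scaling behind a Monte Carlo uncertainty target:
# the relative uncertainty of a sample mean is sigma / (mean * sqrt(N)), so
# reaching a target requires roughly (sigma / (target * mean))^2 histories.
def histories_needed(sample_mean, sample_sigma, target_rel_uncertainty):
    return math.ceil((sample_sigma / (target_rel_uncertainty * sample_mean)) ** 2)

# Pilot run: stand-in scoring data (assumed Gaussian for illustration).
random.seed(1)
pilot = [random.gauss(1.0, 0.5) for _ in range(1000)]
mean = sum(pilot) / len(pilot)
sigma = math.sqrt(sum((x - mean) ** 2 for x in pilot) / (len(pilot) - 1))
print(histories_needed(mean, sigma, 0.02))  # near (0.5/0.02)^2 = 625 when mean ~ 1
```

Halving the target uncertainty quadruples the required histories, which is why parallelizing across the 28 CPUs matters so much for the photon-beam timing result.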


Dose Distribution and Design of Dynamic Wedge Filter for 3D Conformal Radiotherapy (방사선 입체조형치료를 위한 동적쐐기여과판의 고안과 조직내 선량분포 특성)

  • 추성실
    • Progress in Medical Physics / v.9 no.2 / pp.77-88 / 1998
  • Wedge-shaped isodoses are desired in a number of clinical situations. Hard wedge filters provide nominally angled isodoses, but with the dosimetric consequences of beam hardening, increased peripheral dose, and non-ideal gradients at depth, along with the practical problems of filter handling and placement. Dynamic wedging uses a moving collimator combined with a changing monitor-unit rate to achieve angled isodoses. The segmented treatment tables (STT), which set the monitor units for each position of the moving collimator, were derived from a numerical formula, and the characteristics of the dynamic wedge defined by the STT were compared with measured dosimetry. Methods and Materials: The CLINAC 2100C/D accelerator at Yonsei Cancer Center has two photon energies (6 MV and 10 MV), currently with dynamic wedge angles of 15$^{\circ}$, 30$^{\circ}$, 45$^{\circ}$ and 60$^{\circ}$. The STTs that drive the collimator in concert with changing monitor units are unique for field sizes ranging from 4.0 cm to 20.0 cm in 0.5 cm steps. Transmission wedge factors were measured for each STT with a standard ion chamber, and isodose profiles, isodose curves, and percentage depth doses for the dynamic wedges were measured with film dosimetry. The dynamic wedge angles produced by the STTs agreed well with film dosimetry. Percentage depth doses were closer to those of the open field but shallower than those of the hard wedge filter. The wedge transmission factor decreased as the wedge angle increased and was higher than that of hard wedge filters. Dynamic wedging provided more consistent gradients across the field than hard wedge filters, and it has practical and dosimetric advantages over hard filters, allowing rapid setup and avoiding table collisions. Dynamic wedge filters are a promising replacement for hard filters and a step toward dynamic conformal radiotherapy and intensity-modulated radiotherapy in the future.
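The STT concept, a table of cumulative monitor units versus jaw position, can be sketched with a simplified fluence model. The linear fluence ramp below is an assumption for illustration, not the vendor's actual STT formula or the one derived in the paper:

```python
# Illustrative segmented treatment table (STT) sketch: a moving jaw sweeps
# across the field while monitor units accumulate, so points uncovered early
# receive more dose, producing a wedged profile. The linear fluence model is
# a hypothetical stand-in for the paper's numerical formula.
def stt_cumulative_mu(field_width_cm, step_cm, wedge_gradient):
    """Return (jaw_position_cm, cumulative MU fraction) pairs.

    wedge_gradient: extra relative fluence per cm toward the thin end (assumed).
    """
    table = []
    x = 0.0
    norm = 1.0 + wedge_gradient * field_width_cm
    while x <= field_width_cm + 1e-9:
        # fluence ramps linearly across the field, normalized to 1 at the far edge
        mu_fraction = (x / field_width_cm) * (1.0 + wedge_gradient * x) / norm
        table.append((round(x, 1), round(mu_fraction, 4)))
        x += step_cm
    return table

stt = stt_cumulative_mu(10.0, 0.5, 0.05)  # 10 cm field, 0.5 cm steps
```

The table is monotone from 0 to 1, mirroring how the abstract describes the jaw stepping in 0.5 cm increments while monitor units accumulate to the prescribed total.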


A Study of Various Filter Setups with FBP Reconstruction for Digital Breast Tomosynthesis (디지털 유방단층영상합성법의 FBP 알고리즘 적용을 위한 다양한 필터 조합에 대한 연구)

  • Lee, Haeng-Hwa;Kim, Ye-Seul;Lee, Youngjin;Choi, Sunghoon;Lee, Seungwan;Park, Hye-Suk;Kim, Hee-Joung;Choi, Jae-Gu;Choi, Young-Wook
    • Progress in Medical Physics / v.25 no.4 / pp.271-280 / 2014
  • Recently, digital breast tomosynthesis (DBT) has been investigated to overcome the limitations of conventional mammography (overlapping anatomical structures) and of cone-beam computed tomography (CBCT) (high patient dose). However, incomplete sampling due to the limited scan angle causes interference from neighboring slices. Many studies have sought to reduce such artifacts, and appropriate filters for tomosynthesis have been investigated to address artifacts resulting from incomplete sampling. The primary purpose of this study was to find an appropriate filter scheme for FBP reconstruction in a DBT system to reduce artifacts. We investigated the characteristics of various filter schemes with simulation and with a prototype digital breast tomosynthesis system under the same acquisition parameters and conditions, evaluating artifacts and noise using profiles and the COV (coefficient of variation). The noise with a spectral-filter parameter of 0.25 was reduced by 10% compared with using only the Ram-Lak filter. Because the imbalance of information decreases with decreasing B of the slice-thickness filter, artifacts caused by incomplete sampling were also reduced. In conclusion, we confirmed the basic characteristics of the filter operations and the improvement in image quality obtainable with an appropriate filter scheme. These results can serve as a basis for the research and development of DBT systems by providing information about noise and artifacts for various filter schemes.
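The trade-off the abstract reports, ramp filtering for sharpness plus a spectral window for noise, can be sketched in the frequency domain. The windowing form below is a generic raised-cosine apodization chosen for illustration; the study's actual "spectral filter" parameterization may differ:

```python
import math

# Sketch of the frequency response of a Ram-Lak (ramp) filter with an optional
# spectral apodization window. The raised-cosine window with coefficient
# `alpha` is an assumed form, used here to model a parameter like the 0.25
# spectral-filter setting mentioned above.
def filter_response(n_freqs, alpha=None):
    """Return |H(f)| samples from f=0 to the Nyquist frequency (f normalized to 1).

    alpha=None -> pure ramp; otherwise ramp * (alpha + (1-alpha)*0.5*(1+cos(pi*f))).
    """
    response = []
    for k in range(n_freqs + 1):
        f = k / n_freqs          # normalized frequency
        h = f                    # ramp (Ram-Lak) magnitude grows linearly with f
        if alpha is not None:
            h *= alpha + (1 - alpha) * 0.5 * (1 + math.cos(math.pi * f))
        response.append(h)
    return response

ramp = filter_response(8)
smoothed = filter_response(8, alpha=0.25)
# The window suppresses the high frequencies where noise dominates,
# which is the mechanism behind the reported 10% noise reduction.
```

Lowering `alpha` suppresses the Nyquist-end response more strongly, trading resolution for noise, which mirrors the behavior the study evaluates with COV.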

Feature Analysis of Metadata Schemas for Records Management and Archives from the Viewpoint of Records Lifecycle (기록 생애주기 관점에서 본 기록관리 메타데이터 표준의 특징 분석)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • Journal of Korean Society of Archives and Records Management / v.10 no.2 / pp.75-99 / 2010
  • Digital resources are widely used in modern society, but maintaining and preserving them over time poses fundamental problems. Several standard methods for preserving digital resources have been developed and are in use, and it is widely recognized that metadata is one of the most important components of digital archiving and preservation. There are many metadata standards for the archiving and preservation of digital resources, each with its own features in accordance with its primary application. This means that each schema has to be appropriately selected and tailored for a particular application; in some cases, schemas are combined in a larger framework or container metadata, such as the DCMI application framework and METS. We used the following metadata standards in this study for the feature analysis: AGLS Metadata, which is defined to improve search of both digital and non-digital resources; ISAD(G), a commonly used standard for archives; EAD, which is widely used for digital archives; OAIS, which defines a metadata framework for preserving digital objects; and PREMIS, which is designed primarily for the preservation of digital resources. In addition, we extracted attributes from the decision tree defined for the digital preservation process by the Digital Preservation Coalition (DPC) and compared that set of attributes with these metadata standards. This paper presents the features of these standards obtained through a feature analysis based on the records lifecycle model, shown in a single framework that makes it easy to relate the tasks in the lifecycle to the metadata elements of each standard. As a result of the detailed analysis of the metadata elements, we clarified the features of the standards from the viewpoint of the relationships between elements and lifecycle stages. Because different schemas are used across the records lifecycle, mapping between metadata schemas is often required in the long-term preservation process; it is therefore crucial to build a unified framework to enhance the interoperability of these schemas. This study presents a basis for the interoperability of the different metadata schemas used in digital archiving and preservation.
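The schema mapping the abstract calls for can be illustrated with a small crosswalk. The element pairings below are simplified examples of broadly equivalent descriptive elements, not a normative mapping from the study:

```python
# Illustrative crosswalk between a few descriptive elements of the standards
# discussed above (ISAD(G), EAD, PREMIS, AGLS). The pairings are simplified
# examples for demonstration, not a complete or authoritative mapping.
crosswalk = {
    "identifier": {
        "ISAD(G)": "3.1.1 Reference code",
        "EAD": "<unitid>",
        "PREMIS": "objectIdentifier",
    },
    "title": {
        "ISAD(G)": "3.1.2 Title",
        "EAD": "<unittitle>",
        "AGLS": "dc:title",
    },
    "dates": {
        "ISAD(G)": "3.1.3 Date(s)",
        "EAD": "<unitdate>",
        "AGLS": "dc:date",
    },
}

def map_element(concept, source, target):
    """Translate one concept between schemas, or None if no pairing exists."""
    entry = crosswalk.get(concept, {})
    if source in entry and target in entry:
        return entry[target]
    return None

print(map_element("title", "ISAD(G)", "EAD"))  # <unittitle>
```

The `None` cases are the interesting ones: gaps in the crosswalk are exactly where a unified framework, as the study argues, is needed to preserve meaning across lifecycle stages.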

Students' Perception of Scratch Program using High School Science Class (스크래치를 활용한 고등학교 과학 수업에 대한 학생 인식)

  • Noh, Hee Jin;Paik, Seoung Hye
    • Journal of The Korean Association For Science Education / v.35 no.1 / pp.53-64 / 2015
  • This research examined high school science classes taught using Scratch and surveyed students' perceptions after each class. The participants were male students intending to choose the natural sciences track in the next grade, and four classes were conducted. Students produced journals containing their thoughts and feelings about the classes and projects, which were analyzed with respect to their understanding and perception of the Scratch-based science classes and their use of the Scratch program. The research yields three conclusions. First, students preferred the Scratch-based class to a conventional one: they participated more actively, with higher interest, and felt a sense of accomplishment in producing output by themselves. Second, their work passed through three stages: problem recognition, problem solving, and production. The problem-solving stage was especially complicated and difficult for students; it comprised a Scratch side (design and implementation) and a science side (data gathering and analysis). Students' comprehension of the scientific knowledge increased through this stage and was retained longer. Last, students had a hard time using Scratch because it was their first exposure to the program; it therefore seems that this kind of experience should begin at a lower grade, such as middle school. It is expected that classes of this type will become more widespread as a way to develop students' core abilities.

On-Line Determination Steady State in Simulation Output (시뮬레이션 출력의 안정상태 온라인 결정에 관한 연구)

  • 이영해;정창식;경규형
    • Proceedings of the Korea Society for Simulation Conference / 1996.05a / pp.1-3 / 1996
  • In simulation-based system analysis, the automation of experiments is an area of active research and development. In simulations of computer and communication systems, for example, automated experiment control is required when a large number of models must be simulated. Unless the experimental procedure, including the number of replications, run length, and data-collection method, is automated, the time and human resources required for simulation experiments grow considerably, and analyzing the output data also becomes difficult. To automate simulation experimentation while analyzing output efficiently, the problem of removing the initial bias that arises in every simulation run must be solved first: only when the data used for output analysis are collected in a steady state free of initial bias can the real system be interpreted correctly. In practice, the most important and difficult problem in simulation output analysis is estimating the steady-state mean of the stochastic process formed by the output data, together with a confidence interval (c.i.) for that mean; the information contained in a confidence interval tells the decision maker how accurately the mean can be estimated. However, because the output data obtained from a single simulation run are generally nonstationary and autocorrelated, classical statistical techniques cannot be applied directly, and simulation output-analysis techniques are used instead. This paper presents two new methods for finding the truncation point needed to remove initial bias: one based on Euclidean distance (ED), and one using the backpropagation neural network (BNN) algorithm widely applied to pattern-classification problems. Unlike most existing methods, these techniques require no pilot runs and can determine the truncation point online during a single simulation run. Existing work on truncation points includes the following. Conway's rule sets the truncation point at the first observation that is neither the maximum nor the minimum of the subsequent data; by construction, it cannot determine the point online. In contrast, the Modified Conway Rule (MCR) takes the current observation as the truncation point if it is neither the maximum nor the minimum of the preceding data, and can therefore run online. The Crossings-of-the-Mean Rule (CMR) uses the cumulative mean and counts how many times observations cross that mean from above or below; a crossing count must be chosen, and a chosen count is generally not applicable across systems. The Cumulative-Mean Rule (CMR2) plots the grand cumulative mean of output data obtained from several pilot runs and determines the steady-state point visually; because it uses cumulative means over several runs, it cannot determine the point online, and the analyst must decide subjectively from the graph. Welch's Method (WM) uses a Brownian-bridge statistic, exploiting the property that the statistic converges to the Brownian-bridge distribution as n grows large; the simulation output is grouped into batches and one batch is used as the sample, but the algorithm is complex and requires parameters to be estimated. The Law-Kelton Method (LKM) is based on regression theory: after the simulation ends, a regression line is fitted to the cumulative-mean data, and the truncation point is set where the null hypothesis of zero slope is accepted; since the data are processed in the reverse of the collection order after the run ends, it cannot run online. Welch's Procedure (WP) determines the truncation point visually from moving averages of data collected over five or more simulation runs and relies on iterative deletion, so online determination is impossible; a window size must also be chosen. As this survey shows, existing methods are weak from the standpoint of determining the truncation point online during a single run. Moreover, current commercial simulation software leaves the truncation point to the analyst's discretion, so it cannot be determined accurately and quantitatively for the system under study. When users pick the point arbitrarily, the initial-bias problem is handled poorly: too much data may be deleted unnecessarily, or too little to remove the bias. In addition, most existing methods require pilot runs solely to locate the steady-state point, and since those runs are not used for output analysis, the loss of time is substantial.
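Of the rules surveyed above, the Modified Conway Rule (MCR) is simple enough to sketch directly: the first observation that is neither the maximum nor the minimum of all preceding data becomes the truncation point. A minimal online implementation:

```python
# Online truncation-point sketch of the Modified Conway Rule (MCR) described
# above: the first observation strictly inside the running min/max envelope of
# the preceding data is taken as the truncation (warm-up deletion) point.
def mcr_truncation_point(stream):
    """Return the index of the first non-extreme observation, or None."""
    lo = hi = None
    for i, x in enumerate(stream):
        if lo is None:
            lo, hi = x, x
            continue
        if lo < x < hi:
            return i              # strictly inside the envelope of prior data
        lo, hi = min(lo, x), max(hi, x)
    return None

# Warm-up transient (monotone rise) followed by steady-state fluctuation:
data = [1.0, 2.0, 3.0, 4.0, 3.5, 4.2, 3.8]
print(mcr_truncation_point(data))  # -> 4
```

During a monotone warm-up, every new observation is a fresh extreme, so the rule fires only once the output starts oscillating, which is why it can run online within a single simulation run, unlike Conway's original rule.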


A Study on the Shaped-Beam Antenna with High Gain Characteristic (고이득 특성을 갖는 성형 빔 안테나에 대한 연구)

  • Eom, Soon-Young;Yun, Je-Hoon;Jeon, Soon-Ick;Kim, Chang-Joo
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.18 no.1 s.116 / pp.62-75 / 2007
  • This paper describes a shaped-beam antenna for increasing the gain of a radiating element. The proposed antenna is composed of an exciting element and a multi-layered disk array structure (MDAS). Stacked microstrip patch elements were used as the exciter to radiate electromagnetic power into the MDAS efficiently over a broad band, and finite metallic disk array elements, which act as a director shaping the antenna beam for high gain, were periodically layered onto it. Efficient power coupling between the exciter and the MDAS must be achieved for the antenna to have a high gain characteristic, so the design parameters of the exciter and the MDAS were optimized together to meet the required specifications. In this study, a shaped-beam antenna with high gain was optimally designed for linear polarization over the frequency band of $9.6{\sim}10.4\;GHz$. Two construction methods, using thin dielectric film and dielectric foam materials respectively, were also proposed to implement the MDAS. In particular, computer simulation showed how the electrical performance of the antenna with the thin-dielectric-film MDAS varies with the number of disk array elements in the stack. Two antenna breadboards with MDASs realized in thin dielectric film and in dielectric foam were fabricated, but the measurements versus the number of stacked disk array elements were conducted only on the breadboard (Type 1) whose MDAS used thin dielectric film, for comparison with the performance variations obtained in simulation. The measured antenna gain was in good agreement with the simulated one and showed the same periodicity of gain variation with the number of stacked disk array elements. The electrical performance of the Type 1 antenna was measured at the center frequency of 10 GHz. With ten stacked disk array elements, a maximum antenna gain of 15.65 dBi was obtained, and the measured return loss was not less than 11.4 dB within the operating band; thus a 5 dB gain improvement of the Type 1 antenna can be obtained with the MDAS excited by the stacked microstrip patch elements. With twelve stacked disk array elements, the gain of Type 1 measured 1.35 dB higher than that of Type 2 owing to the outer dielectric ring effect, and the 3 dB beamwidths measured for the two breadboards were about $28^{\circ}$ and $36^{\circ}$, respectively.
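The gain figures above are quoted in decibels; converting them to linear power ratios makes the size of the improvements concrete. A minimal conversion sketch:

```python
import math

# Decibel/linear conversions for the gain figures quoted above: the 5 dB
# improvement from the MDAS and the 1.35 dB Type 1 vs. Type 2 difference.
def db_to_ratio(db):
    """Power ratio corresponding to a dB value."""
    return 10 ** (db / 10)

def ratio_to_db(ratio):
    """dB value corresponding to a power ratio."""
    return 10 * math.log10(ratio)

print(round(db_to_ratio(5.0), 2))   # ~3.16x power
print(round(db_to_ratio(1.35), 2))  # ~1.36x power
```

So the MDAS roughly triples the radiated power density on boresight relative to the bare exciter, and the outer dielectric ring adds a further ~36% for Type 1 over Type 2.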