• Title/Summary/Keyword: Computer Environment (컴퓨터 환경)


Scheduling Algorithms and Queueing Response Time Analysis of the UNIX Operating System (UNIX 운영체제에서의 스케줄링 법칙과 큐잉응답 시간 분석)

  • Im, Jong-Seol
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.3
    • /
    • pp.367-379
    • /
    • 1994
  • This paper describes the scheduling algorithms of the UNIX operating system and shows an analytical approach to approximate the average conditional response time for a process in the UNIX operating system. The average conditional response time is the average time between the submittal of a process requiring a certain amount of CPU time and the completion of that process. The process scheduling algorithms in the UNIX system are based on priority service disciplines. That is, the behavior of a process is governed by the UNIX process scheduling rules: (ⅰ) time-shared computer usage is obtained by allotting each request a quantum until it completes its required CPU time, (ⅱ) nonpreemptive switching in system mode and preemptive switching in user mode are applied to determine the quantum, (ⅲ) the first-come-first-served discipline is applied within the same priority level, and (ⅳ) after completing an allotted quantum, the process is placed at the end of either the runnable queue corresponding to its priority or the disk queue where it sleeps. These scheduling rules create a round-robin effect in user mode. Using the round-robin effect and the preemptive switching, we approximate the process delay in user mode. Using the nonpreemptive switching, we approximate the process delay in system mode. We also consider the process delay due to disk input and output operations. The average conditional response time is then obtained by approximating the total process delay. The results show an excellent response time for processes requiring system time at the expense of processes requiring user time. (A simplified scheduling sketch follows this entry.)

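The priority-based, round-robin behaviour summarized above (fixed quanta, a runnable queue per priority level, first-come-first-served within a level, and requeueing at the tail when a quantum expires) can be illustrated with a small simulation. The sketch below is a minimal illustrative model, not the UNIX kernel mechanism analysed in the paper; the quantum length, priorities, and CPU demands are assumed example values.

    from collections import deque

    QUANTUM = 10  # ms per time slice; assumed value for the example

    def simulate(processes):
        """processes: list of (name, priority, required_cpu_ms); lower number = higher priority."""
        queues = {}
        for name, prio, need in processes:
            queues.setdefault(prio, deque()).append([name, need])
        clock, completion = 0, {}
        while any(queues.values()):
            prio = min(p for p, q in queues.items() if q)  # highest-priority non-empty queue
            name, remaining = queues[prio].popleft()       # FCFS within the same priority level
            run = min(QUANTUM, remaining)
            clock += run
            remaining -= run
            if remaining > 0:
                queues[prio].append([name, remaining])     # back to the tail: the round-robin effect
            else:
                completion[name] = clock                   # response time measured from t = 0
        return completion

    # Three processes submitted at t = 0 with assumed CPU demands (ms).
    print(simulate([("A", 0, 25), ("B", 0, 5), ("C", 1, 15)]))  # {'B': 15, 'A': 30, 'C': 45}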

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics
    • /
    • v.17 no.1
    • /
    • pp.24-31
    • /
    • 2006
  • Automated analysis software was developed to measure the magnitude of the intrafractional and interfractional errors during breast radiation treatments. Error analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256 × 256 pixels). Twelve patients were imaged for a minimum of 7 treatment days. During each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations (σ) of intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin to include the clinical target volume (CTV) with a 95% confidence level was calculated as 2 × (1.96σ). To compensate for intrafractional error (mainly due to breathing motion), the required PTV margin ranged from 2 mm to 4 mm. However, PTV margins compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal radiation treatment or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce large PTVs for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software, as described in this report, to process and take advantage of the large numbers of EPID images available for error analysis, which will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient monitoring information with minimal effort. (The margin arithmetic is sketched after this entry.)

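The margin recipe quoted above, 2 × (1.96σ), reduces to a short calculation once the positioning errors have been extracted from the EPID images. The sketch below reproduces only that arithmetic; the per-fraction offsets are invented example values, not the patient data of this study.

    import math

    def ptv_margin(errors_mm):
        """PTV margin (mm) covering the CTV at the 95% confidence level,
        using the recipe quoted in the abstract: 2 * (1.96 * sigma)."""
        n = len(errors_mm)
        mean = sum(errors_mm) / n
        sigma = math.sqrt(sum((e - mean) ** 2 for e in errors_mm) / (n - 1))  # sample standard deviation
        return 2 * 1.96 * sigma

    # Assumed example: setup offsets (mm) of one patient over several fractions.
    setup_offsets = [1.2, -0.8, 2.5, 3.1, -1.9, 0.4, 2.2]
    print(f"PTV margin = {ptv_margin(setup_offsets):.1f} mm")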

Benchmark Results of a Monte Carlo Treatment Planning System (몬데카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics
    • /
    • v.13 no.3
    • /
    • pp.149-155
    • /
    • 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing make the clinical use of Monte Carlo based dose calculations possible. To compare the speed and accuracy of dose calculations between different codes, benchmark tests were proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising 28 Intel Pentium CPUs of various types was implemented for routine clinical use. The purpose of this study was to evaluate the performance of our system using the above benchmark tests. The benchmark procedure comprises three parts: a) speed of photon beam dose calculation inside a given phantom of 30.5 cm × 39.5 cm × 30 cm depth, filled with 5 mm³ voxels, within 2% statistical uncertainty; b) speed of electron beam dose calculation inside the same phantom as that used for the photon beams; c) accuracy of photon and electron beam calculations inside a heterogeneous slab phantom compared with the reference results of EGS4/PRESTA calculations. In the speed benchmark tests, it took 5.5 minutes to achieve less than 2% statistical uncertainty for 18 MV photon beams. Though the net calculation for electron beams was an order of magnitude faster than that for the photon beams, the overall calculation time was similar to the photon beam case due to the overhead required to maintain parallel processing. Since our Monte Carlo code is EGSnrc, which is an improved version of EGS4, the accuracy tests of our system showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system shows clinically meaningful results. Though other more efficient codes have been developed, such as MCDOSE and VMC++, BEAMnrc based on the EGSnrc code system may be used for routine clinical Monte Carlo treatment planning in conjunction with clustering techniques. (An uncertainty-estimation sketch follows this entry.)

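The stopping criterion of the speed tests is the per-voxel statistical uncertainty (the calculations run until it falls below 2%). A common way to estimate it is the batch method: the histories are split into independent batches and the spread of the batch means is used. The sketch below illustrates that estimator on synthetic numbers; it is not code from the EGSnrc/BEAMnrc system used in the paper.

    import math
    import random

    def batch_uncertainty(batch_doses):
        """Relative (1-sigma) statistical uncertainty of a voxel dose,
        estimated from the means of independent simulation batches."""
        n = len(batch_doses)
        mean = sum(batch_doses) / n
        var_of_mean = sum((d - mean) ** 2 for d in batch_doses) / (n * (n - 1))
        return math.sqrt(var_of_mean) / mean

    # Synthetic example: 10 batch estimates of the dose in one voxel (arbitrary units).
    random.seed(0)
    batches = [1.0 + random.gauss(0, 0.05) for _ in range(10)]
    print(f"relative uncertainty = {100 * batch_uncertainty(batches):.2f}%  (simulate more histories until < 2%)")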

Feature Analysis of Metadata Schemas for Records Management and Archives from the Viewpoint of Records Lifecycle (기록 생애주기 관점에서 본 기록관리 메타데이터 표준의 특징 분석)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.10 no.2
    • /
    • pp.75-99
    • /
    • 2010
  • Digital resources are widely used in our modern society. However, we are facing fundamental problems in maintaining and preserving digital resources over time. Several standard methods for preserving digital resources have been developed and are in use. It is widely recognized that metadata is one of the most important components for digital archiving and preservation. There are many metadata standards for archiving and preservation of digital resources, where each standard has its own features in accordance with its primary application. This means that each schema has to be appropriately selected and tailored for a particular application, and, in some cases, those schemas are combined in a larger framework and container metadata such as the DCMI application framework and METS. In this study, we used the following metadata standards for the feature analysis: AGLS Metadata, which is defined to improve search of both digital and non-digital resources; ISAD(G), which is a commonly used standard for archives; EAD, which is widely used for digital archives; OAIS, which defines a metadata framework for preserving digital objects; and PREMIS, which is designed primarily for the preservation of digital resources. In addition, we extracted attributes from the decision tree defined for the digital preservation process by the Digital Preservation Coalition (DPC) and compared the set of attributes with these metadata standards. This paper shows the features of these metadata standards obtained through the feature analysis based on the records lifecycle model. The features are presented in a single framework, which makes it easy to relate the tasks in the lifecycle to the metadata elements of these standards. As a result of the detailed analysis of the metadata elements, we clarified the features of the standards from the viewpoint of the relationships between the elements and the lifecycle stages. Mapping between metadata schemas is often required in the long-term preservation process because different schemas are used across the records lifecycle; therefore, it is crucial to build a unified framework to enhance the interoperability of these schemas. This study presents a basis for the interoperability of different metadata schemas used in digital archiving and preservation. (A sample crosswalk sketch follows this entry.)
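
Schema mapping of the kind discussed at the end of the abstract is often implemented as a crosswalk table, optionally organized by lifecycle stage. The sketch below is a deliberately tiny, hypothetical crosswalk: the element names are real elements of ISAD(G), EAD, and PREMIS, but the lifecycle-stage grouping is an illustrative assumption, not the paper's analysis result.

    # Hypothetical, abbreviated crosswalk between a few elements of the standards
    # compared above. The lifecycle-stage labels are illustrative assumptions.
    CROSSWALK = {
        "creation/capture": {
            "ISAD(G)": "Title (3.1.2)",
            "EAD":     "<unittitle>",
            "PREMIS":  "objectIdentifier",
        },
        "preservation": {
            "ISAD(G)": "Archival history (3.2.3)",
            "EAD":     "<custodhist>",
            "PREMIS":  "eventType",
        },
    }

    def map_element(stage, source, target):
        """Translate an element name from one schema to another within a lifecycle stage."""
        row = CROSSWALK[stage]
        return row[source], row[target]

    print(map_element("preservation", "EAD", "PREMIS"))  # ('<custodhist>', 'eventType')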

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in various application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of the data elements in a sequence is considered, so it can easily find simple sequential patterns but has a limited ability to find the more interesting sequential patterns used in real-world applications. One of the essential research topics addressing this limit is weighted sequential pattern mining, in which not only the generation order of the data elements but also their weights are considered in order to obtain more interesting sequential patterns. Recently, data has increasingly taken the form of continuous data streams rather than finite stored data sets in various application fields, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once to analyze the data stream, and the memory usage for the analysis should be finitely restricted even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible to produce an up-to-date analysis result of the data stream, so that it can be instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering the changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams. It mainly focuses on efficient mining of frequent itemsets and sequential patterns over data streams, which have proven useful in conventional data mining for finite data sets. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, they have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel interesting patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing a mining method that finds them is a valuable research topic in the field of mining data streams, and such patterns can be effectively used to analyze recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a mining method for weighted sequential patterns over sequence data streams based on this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements in the pattern without any pre-defined weight information. That is, in this approach, the gaps between the data elements in each sequential pattern, as well as their generation orders, are used to obtain the weight of the sequential pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data in the form of data streams rather than finite data sets, the proposed method mainly focuses on sequence data streams. (A gap-weighting sketch follows this entry.)
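
To make the idea of a gap-based weight concrete, the sketch below computes one plausible weight for an occurrence of a sequential pattern from the gaps between the positions of its elements (tighter occurrences receive larger weights via an exponential decay). This is an illustrative formulation with assumed parameters, not necessarily the exact weighting function defined in the paper.

    import math

    def gap_based_weight(positions, decay=0.5):
        """Weight of one occurrence of a sequential pattern whose elements occur
        at the given positions (timestamps or sequence indexes, in order).
        Each gap contributes exp(-decay * gap); the weight is the average
        contribution. The decay constant and the averaging are assumptions."""
        gaps = [b - a for a, b in zip(positions, positions[1:])]
        if not gaps:          # single-element pattern: neutral weight
            return 1.0
        return sum(math.exp(-decay * g) for g in gaps) / len(gaps)

    # Occurrences of the pattern <a, b, c> at different positions in a sequence:
    print(gap_based_weight([3, 4, 9]))  # ~0.34, the large 4 -> 9 gap lowers the weight
    print(gap_based_weight([3, 4, 5]))  # ~0.61, a tighter occurrence gets a higher weight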

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.81-90
    • /
    • 2013
  • During the past decade, many changes have taken place and new technologies have continually been developed in the computing area. The brick wall in computing, especially the power wall, has shifted the computing paradigm from computing hardware, including the processor and system architecture, to the programming environment and application usage. The high performance computing (HPC) area, in particular, has experienced drastic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result, systems of tens of PetaFLOPS are now prevalent. Korea's ICT is well developed and the country is considered one of the leading countries in the world, but not in the supercomputing area. In this paper, we describe the architecture design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bio-informatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, the file system, system software, and bio-applications. The MAHA supercomputing system is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed to be user-friendly and easy to use, based on integrated system management components such as Bio Workflow management, Integrated Cluster management, and Heterogeneous Resource management. The MAHA supercomputing system was first installed in Dec. 2011. The theoretical performance of the MAHA system was 50 TeraFLOPS, with a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The MAHA system will be upgraded to 100 TeraFLOPS performance in Jan. 2013. (An efficiency-calculation sketch follows this entry.)
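
The gap between the theoretical 50 TeraFLOPS and the measured 30.3 TeraFLOPS quoted above corresponds to roughly 60% efficiency. The sketch below shows that arithmetic together with the usual way a theoretical peak is composed from per-node figures; the per-node breakdown in the example is an assumed illustration, not the actual MAHA node configuration.

    def theoretical_peak_tflops(nodes, sockets, cores, ghz, flops_per_cycle):
        """Usual peak composition: nodes x sockets x cores x clock x FLOPs/cycle (GFLOPS -> TFLOPS)."""
        return nodes * sockets * cores * ghz * flops_per_cycle / 1000.0

    # Figures quoted in the abstract (32 computing nodes).
    theoretical_tflops = 50.0
    measured_tflops = 30.3
    print(f"efficiency = {100 * measured_tflops / theoretical_tflops:.1f}%")  # ~60.6%

    # Assumed, purely illustrative CPU-only breakdown (not MAHA's real configuration):
    print(theoretical_peak_tflops(nodes=32, sockets=2, cores=8, ghz=2.6, flops_per_cycle=8))  # ~10.6 TFLOPS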

A Study of a Non-commercial 3D Planning System, Plunc for Clinical Applicability (비 상업용 3차원 치료계획시스템인 Plunc의 임상적용 가능성에 대한 연구)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal
    • /
    • v.16 no.1
    • /
    • pp.71-79
    • /
    • 1998
  • Purpose: The objective of this study is to introduce our installation of a non-commercial 3D planning system, Plunc, and to confirm its clinical applicability in various treatment situations. Materials and Methods: We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200 MHz PC (128 MB RAM, Millenium VGA) running the Linux operating system. To examine the accuracy of dose distributions calculated by Plunc, we entered beam data for the 6 MV photon beam of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratios, scatter-maximum ratios, attenuation coefficients, and shapes of wedge filters. We then compared the dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beams; and dose distributions under an air gap) with measured values. Results: Plunc operated in almost real time, except for spending about 10 seconds on full-volume dose distribution and dose-volume histogram (DVH) calculation on the PC described above. Compared with measurements for irradiations at 90-cm SSD with the isocenter at 10-cm depth, the PDD curves calculated by Plunc did not exceed 1% inaccuracy except in the buildup region. For dose profiles with and without wedge filters, the calculated values are accurate within 2% except in the low-dose region outside the field, where Plunc showed a 5% dose reduction. For the oblique incident beam, it showed good agreement except in the low-dose region below 30% of the isocenter dose. In the case of the dose distribution under an air gap, there was a 5% error in the central-axis dose. Conclusion: By comparing photon dose calculations using Plunc with measurements, we confirmed that Plunc showed acceptable accuracy of about 2-5% in typical treatment situations, which is comparable to commercial planning systems using correction-based algorithms. Plunc does not currently have a function for electron beam planning; however, it is possible to implement electron dose calculation modules or more accurate photon dose calculation algorithms in the Plunc system. Plunc is shown to be useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available. (A comparison sketch follows this entry.)

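The commissioning comparison described above reduces to point-by-point differences between calculated and measured curves such as the PDD. The sketch below expresses the difference as a percentage of the measured maximum, which is one common convention but an assumption here; the sample numbers are invented for illustration, not the Siemens MXE 6740 beam data of the study.

    def percent_differences(calculated, measured):
        """Point-by-point difference between calculated and measured dose values,
        expressed as a percentage of the measured maximum (assumed convention)."""
        d_max = max(measured)
        return [100.0 * (c - m) / d_max for c, m in zip(calculated, measured)]

    # Invented sample PDD values (% of maximum dose) at a few depths.
    measured_pdd   = [100.0, 98.2, 91.5, 84.7, 78.3]
    calculated_pdd = [100.0, 97.5, 92.3, 85.1, 77.2]
    diffs = percent_differences(calculated_pdd, measured_pdd)
    print([f"{d:+.1f}%" for d in diffs])  # all within the ~1-2% band reported above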

A Study on Improving Scheme and An Investigation into the Actual Condition about Components of Physical Distribution System (물류시스템 구성요인에 관한 실태분석과 개선방안에 관한 연구)

  • Kim, Kyeong-Cho
    • Journal of Distribution Science
    • /
    • v.7 no.4
    • /
    • pp.47-56
    • /
    • 2009
  • The purpose of this study is to present alternatives for improving the efficiency and rationality of physical distribution system management, which is influenced by many factors. Therefore, the study relies on the documentary method and the survey method to achieve this purpose. The major components of a physical distribution system, referred to as elements, include the warehouse·storage system, the transportation system, the inventory system, and the physical distribution information system. The factors used in this study are ① factor of product(quality·A/S·added value of product·adaptability of product·technical competitive power against other enterprises), ② factor of market(market channel·kinds of customer·physical distribution share), ③ factor of warehouse·storage(warehouse design·size·direction·storage ability·warehouse quality), ④ factor of transportation(promptness·reliability·responsibility·kinds of transportation·cooperative united transportation system·national transportation network), ⑤ factor of packaging(packaging design·material·educating program·pollution degree measure program), ⑥ factor of inventory(ordinary inventory criterion·consistency of inventory records), ⑦ factor of unloading(unloading machines·machine-holding ratio), ⑧ factor of information system(physical distribution quantity analysis·usable computer part), ⑨ factor of physical distribution cost(sales ratio to product), and ⑩ factor of physical distribution system(physical distribution center, etc.). The implications of this study can be summarized as follows: ① In firms that have not adopted a systems-integrative approach, physical distribution is a fragmented and often uncoordinated set of activities spread throughout various functions, with each function having its own set of priorities and measurements. ② Physical distribution is recognized as an important strategic factor rather than a simple cost-reduction factor. ③ It can be used as a strategic competitive tool by the enterprise.


Simulation and Post-representation: a study of Algorithmic Art (시뮬라시옹과 포스트-재현 - 알고리즘 아트를 중심으로)

  • Lee, Soojin
    • 기호학연구
    • /
    • no.56
    • /
    • pp.45-70
    • /
    • 2018
  • Criticism of the postmodern philosophy of the system of representation, which has continued since the Renaissance, is based on a critique of the dichotomy that separates subject and object and the environment from the human being. Interactivity, highlighted in a series of works emerging as postmodern trends in the 1960s, carried over into the interactive aspect of digital art in the late 1990s. The key feature of digital art is the possibility of infinite variations reflecting unpredictable changes based on public participation on the spot. In this process, the importance of computer programs is highlighted. Instead of using existing programs as they are, more and more artists are creating and programming their own algorithms or creating unique algorithms through collaborations with programmers. We live in an era of paradigm shift in which programming itself must be considered a creative act. Simulation technology and VR technology draw attention as techniques to represent the meaning of reality. Simulation technology helps artists create experimental works. In fact, Baudrillard's concept of simulation defines an other reality that has nothing to do with our reality, rather than a reality that closely represents our own. His book Simulacra and Simulation refers to the existence of a reality entirely different from the traditional concept of reality. His argument does not concern problems of right and wrong; there is no metaphysical meaning. Applying the concept of simulation to algorithmic art, the artist models the complex attributes of reality in the digital system and aims to build and integrate the internal laws that structure and activate the world (specific or individual), that is to say, to simulate the world. If the images of the traditional order correspond to the reproduction of the real world, the synthesized images of algorithmic art and simulated space-time are forms of art that facilitate experience. The moment of seeing and listening to the work of Ian Cheng presented in this article is a moment of personal experience, and perception takes place at that time. It is not a complete and closed process, but a continuous and changing one. It is this active and situational awareness that is required of the audience for the comprehension of the forms of post-representation.

Exploring the 4th Industrial Revolution Technology from the Landscape Industry Perspective (조경산업 관점에서 4차 산업혁명 기술의 탐색)

  • Choi, Ja-Ho;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.47 no.2
    • /
    • pp.59-75
    • /
    • 2019
  • This study was carried out to explore the 4th Industrial Revolution technology from the perspective of the landscape industry in order to provide the basic data necessary to increase its virtuous-circle value. The 4th Industrial Revolution, the characteristics of the landscape industry, and urban regeneration were considered, and the methodology was established, including a technical classification system suitable for systematic research, which was selected as a framework. First, 4th Industrial Revolution technologies based on digital data that could be utilized to increase the virtuous-circle value of the landscape industry were selected. At the 'Element Technology Level', 'Core Technologies' such as the Internet of Things, Cloud Computing, Big Data, Artificial Intelligence, and Robots, and 'Peripheral Technologies' such as Virtual or Augmented Reality, Drones, 3D/4D Printing, and 3D Scanning were highlighted as 4th Industrial Revolution technologies. It was shown that the virtuous-circle value can be increased when these are applied at the 'Trend Level', in particular to the landscape industry. The 'System Level' was analyzed as a general-purpose technology; based on the platform, element-technology-level components (computers and smart devices) are systematically interconnected and realized as 4th Industrial Revolution technology based on digital data. The application of the 'Trend Level' specific to the landscape industry was shown to be effective for increasing the virtuous-circle value. The synergistic effects and implementation of the proposed method at the Trend Level can be realized by applying the Element Technology Level. Smart gardens, smart parks, and the like were analyzed as the level that should be pursued. It was judged that Smart City, Smart Home, Smart Farm and Precision Agriculture, Smart Tourism, and Smart Health Care could be highly linked through collaboration among technologies in adjacent areas at the Trend Level. Additionally, various utilization measures of the related technologies applied at the Trend Level were highlighted in the processes of urban regeneration, public service space creation, maintenance, and public service. In other words, with the realization of ubiquitous computing, Hyper-Connectivity, Hyper-Reality, Hyper-Intelligence, and Hyper-Convergence reflecting the basic characteristics of digital technology can be achieved in the landscape industry. It was also found that the landscape industry can effectively accommodate and coordinate the needs for new roles, such as education and consulting, as well as existing tasks, even when participating in urban regeneration projects. In particular, it was shown that the overall landscaping area is effective in increasing the virtuous-circle value when it systematizes the related technologies at the Trend Level by linking maintenance as a strategic bridgehead, because the industrial structure is effective in distributing data and information produced through various channels. Subsequent research, such as demonstrating the convergence of 4th Industrial Revolution technologies based on digital data in the creation, maintenance, and servicing of actual landscape spaces, is necessary.