• Title/Summary/Keyword: Code Library


Construction of voxel head phantom and application to BNCT dose calculation (Voxel 머리팬텀 제작 및 붕소중성자포획요법 선량계산에의 응용)

  • Lee, Choon-Sik; Lee, Choon-Ik; Lee, Jai-Ki
    • Journal of Radiation Protection and Research / v.26 no.2 / pp.93-99 / 2001
  • A voxel head phantom, intended to overcome the limitation of mathematical phantoms in depicting anatomical details, was constructed, and an example dose calculation for BNCT was performed. The repeated-structure algorithm of the general-purpose Monte Carlo code MCNP4B was applied for the voxel Monte Carlo calculation. A simple binary voxel phantom and a combinatorial-geometry phantom composed of two materials were constructed to validate the voxel Monte Carlo calculation system. The tomographic images of the VHP man provided by the NLM (National Library of Medicine) were segmented and indexed to construct the voxel head phantom. Comparison of doses for broad parallel gamma and neutron beams in the AP and PA directions showed a decrease of brain dose due to the attenuation of neutrons in the eyeballs in the case of the voxel head phantom. A spherical tumor volume with a diameter of 5 cm was defined in the center of the brain for the BNCT dose calculation, in which accurate three-dimensional dose calculation is essential. As a result of the BNCT dose calculation for downward neutron beams of 10 keV and 40 keV, the tumor dose approximately doubles when the boron concentrations in the tumor and the normal tissue are 30 μg/g and 3 μg/g, respectively. This study established the voxel Monte Carlo calculation system and suggested the feasibility of precise dose calculation in therapeutic radiology.
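
The segmentation-and-indexing step mentioned in the abstract can be illustrated with a short sketch. This is a minimal illustration, not the authors' code: it assumes the segmented tomographic slices are available as 2-D integer label arrays, and the label-to-material mapping is invented for the example.

```python
import numpy as np

# Hypothetical organ-label -> material-index mapping (for illustration only).
LABEL_TO_MATERIAL = {
    0: 0,   # background -> void
    1: 1,   # soft tissue
    2: 2,   # bone
    3: 3,   # brain
    4: 4,   # eye
}

def build_voxel_phantom(segmented_slices):
    """Stack segmented 2-D slices into a 3-D array of material indices."""
    volume = np.stack(segmented_slices, axis=0)   # (nz, ny, nx) organ labels
    phantom = np.zeros_like(volume)
    for label, material in LABEL_TO_MATERIAL.items():
        phantom[volume == label] = material
    return phantom

# Example: three dummy 4x4 slices standing in for segmented tomographic images.
slices = [np.random.randint(0, 5, (4, 4)) for _ in range(3)]
phantom = build_voxel_phantom(slices)
print(phantom.shape, np.unique(phantom))
```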


A Memory-efficient Partially Parallel LDPC Decoder for CMMB Standard (메모리 사용을 최적화한 부분 병렬화 구조의 CMMB 표준 지원 LDPC 복호기 설계)

  • Park, Joo-Yul; Lee, So-Jin; Chung, Ki-Seok; Cho, Seong-Min; Ha, Jin-Seok; Song, Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.48 no.1 / pp.22-30 / 2011
  • In this paper, we propose a memory-efficient multi-rate Low Density Parity Check (LDPC) decoder for China Mobile Multimedia Broadcasting (CMMB). We find the best trade-off between performance and circuit area by designing a partially parallel decoder capable of passing multiple messages in parallel. By designing an efficient address generation unit (AGU) based on an index matrix, we reduce both the memory requirement and the computational complexity. The proposed regular LDPC decoder was designed in Verilog HDL and synthesized with Synopsys Design Compiler using a Chartered 0.18 μm CMOS cell library. The synthesized design has a gate count of 455K (in NAND2 equivalents). For the two code rates supported by CMMB, the rate-1/2 decoder achieves a throughput of 14.32 Mbps, and the rate-3/4 decoder achieves 26.97 Mbps. Compared with a conventional LDPC decoder for CMMB, the proposed design requires only 0.39% of the memory.
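
The index-matrix-driven address generation described in the abstract can be sketched in a few lines. This is only a generic illustration of the idea (compute message addresses on the fly from a compact matrix of cyclic offsets rather than storing the full parity-check connectivity); the matrix entries and sub-block size are made up and do not correspond to the actual CMMB code.

```python
import numpy as np

Z = 256  # sub-block (parallelism) size, hypothetical
# Index matrix: entry (i, j) is a cyclic offset, or -1 for an empty sub-block.
INDEX = np.array([
    [ 3, -1,  7,  0],
    [-1,  5,  2,  9],
])

def message_address(block_row, block_col, lane):
    """Memory address of the message handled by 'lane' in one sub-block."""
    offset = INDEX[block_row, block_col]
    if offset < 0:
        return None                        # no edge in this sub-block
    base = (block_row * INDEX.shape[1] + block_col) * Z
    return base + (lane + offset) % Z      # cyclic shift within the sub-block

print(message_address(0, 2, lane=10))
```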

A Study on the Medium Designator In Non-book Materials (비도서자료의 매체표시에 관한 연구)

  • Nam Tae Woo
    • Journal of the Korean Society for Library and Information Science / v.15 / pp.119-140 / 1988
  • This paper is a study of the medium designator in non-book materials. Its main contents are as follows. 1. The medium designator serves to indicate the class of material to which an item belongs and gives an 'early warning' to the catalogue user. 2. The medium designator may be further divided into two elements: a general material designation (GMD), for example 'videorecording', and a specific material designation (SMD), for example 'videodisc'. 3. A GMD is, in cataloging, a term indicating the broad class of material to which a bibliographic item belongs, such as 'motion picture'; an SMD is, in descriptive cataloging, a term indicating the special class of material (usually the class of physical object) to which a bibliographic item belongs, such as 'videocassette'. 4. Locating the medium designator after the title proper was not prescribed until ISBD(G) and AACR2. In pre-ISBD(G) codes, the early-warning type of medium designator was placed after all title information; in AACR2, the medium designator is placed after the title proper but before the parallel title and other title information. 5. As for terminology, two separate lists of designations are given in AACR2 rule 1.1C1, one for British and one for North American use; the British list contains fewer terms and uses generic categories to group together some of the North American terms. 6. The problem of where to place the medium designator might be circumvented by using some kind of early-alerting device other than a formal element of bibliographic description, and various alternatives have been suggested. A popular device is the provision of symbols or 'media codes' that form part of the call number and indicate the particular medium type; 'colour-coding' has been used by some libraries but is now largely discouraged. 7. According to Frost, the medium designator has been generally recognized as serving three functions: 1) as a statement of the nature or basic format of the item cataloged, and thus as a means of informing the user of the type of material at hand; 2) as a description of the physical characteristics of the medium, and as a means of alerting the user to the equipment needed to make use of the item; 3) as a device to distinguish different physical formats which share the same title. 8. AACR2 raises some problems which decision makers have not had to face previously: it provides a GMD for every item in the collection, including books, and it makes the application of any or all GMDs optional.
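
The placement rule discussed in points 4 and 5 can be made concrete with a small sketch that assembles a title statement with the GMD immediately after the title proper and before any parallel or other title information. The function name and sample values are invented for illustration.

```python
def title_statement(title_proper, gmd=None, parallel_title=None, other_title=None):
    """Assemble a title statement with the GMD placed after the title proper."""
    parts = [title_proper]
    if gmd:
        parts.append(f"[{gmd}]")              # GMD in square brackets, AACR2 style
    if parallel_title:
        parts.append(f"= {parallel_title}")   # parallel title follows the GMD
    if other_title:
        parts.append(f": {other_title}")
    return " ".join(parts)

print(title_statement("Gone with the wind", gmd="videorecording",
                      other_title="the making of a legend"))
```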


VLSI Design of DWT-based Image Processor for Real-Time Image Compression and Reconstruction System (실시간 영상압축과 복원시스템을 위한 DWT기반의 영상처리 프로세서의 VLSI 설계)

  • Seo, Young-Ho; Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.102-110 / 2004
  • In this paper, we propose a VLSI architecture for a real-time image compression and reconstruction processor using the 2-D discrete wavelet transform, and implement it in hardware with minimal resources using an ASIC library. In the implemented hardware, the data path consists of the DWT kernel for the forward and inverse wavelet transforms, the quantizer/dequantizer, the Huffman encoder/decoder, the adder/buffer for the inverse wavelet transform, and the interface modules for input/output. The control part consists of the programming registers, the controller that decodes the instructions and generates the control signals, and the status register that exposes the internal state to the outside of the circuit. Depending on how it is programmed, the designed circuit can output wavelet coefficients, quantization coefficients or indices, and Huffman codes in image compression mode, and the Huffman decoding result, reconstructed quantization coefficients, and reconstructed wavelet coefficients in image reconstruction mode. The programming register has 16 stages, and one instruction performs one horizontal (or vertical) filtering pass at one level. Since the registers execute automatically in the right order, a 4-level discrete wavelet transform can be carried out with a single program. We synthesized the designed circuit with the synthesis library of a Hynix 0.35 μm CMOS process using the Synopsys synthesis tool and extracted the gate-level netlist. From the netlist, timing information was extracted using the Vela tool. We performed timing simulation with the extracted netlist and timing information using the NC-Verilog tool, and place-and-route and layout were carried out using the Apollo tool. The implemented hardware has about 50,000 gates and operates stably at an 80 MHz clock frequency.
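
The compression-side data path summarized above (DWT kernel, quantizer, entropy coder) can be sketched behaviourally in software. The sketch below shows only one level of a 2-D Haar wavelet transform followed by uniform quantization; the Huffman stage and the programmable-register control of the real hardware are omitted, and the code is not the authors' implementation.

```python
import numpy as np

def haar_dwt2_level(image):
    """One level of a 2-D Haar DWT: returns LL, LH, HL, HH sub-bands."""
    a = image[0::2, 0::2].astype(float)
    b = image[0::2, 1::2].astype(float)
    c = image[1::2, 0::2].astype(float)
    d = image[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def quantize(band, step):
    """Uniform quantization; the index stream would feed the entropy coder."""
    return np.round(band / step).astype(int)

image = np.random.randint(0, 256, (8, 8))
ll, lh, hl, hh = haar_dwt2_level(image)
indices = [quantize(b, step=4) for b in (lh, hl, hh)]
print(ll.shape, indices[0].min(), indices[0].max())
```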

Organ Dose Conversion Coefficients Calculated for Korean Pediatric and Adult Voxel Phantoms Exposed to External Photon Fields

  • Lee, Choonsik; Yeom, Yeon Soo; Griffin, Keith; Lee, Choonik; Lee, Ae-Kyoung; Choi, Hyung-do
    • Journal of Radiation Protection and Research / v.45 no.2 / pp.69-75 / 2020
  • Background: Dose conversion coefficients (DCCs) have been commonly used to estimate radiation-dose absorption by human organs based on physical measurements of fluence or kerma. The International Commission on Radiological Protection (ICRP) has reported a library of DCCs, but few studies have been conducted on their applicability to non-Caucasian populations. In the present study, we collected a total of 8 Korean pediatric and adult voxel phantoms to calculate the organ DCCs for idealized external photon-irradiation geometries. Materials and Methods: We adopted one pediatric female phantom (ETRI Child), two adult female phantoms (KORWOMAN and HDRK Female), and five adult male phantoms (KORMAN, ETRI Man, KTMAN1, KTMAN2, and HDRK Man). A general-purpose Monte Carlo radiation transport code, MCNPX2.7 (Monte Carlo N-Particle Transport extended version 2.7), was employed to calculate the DCCs for 13 major radiosensitive organs in six irradiation geometries (anteroposterior, posteroanterior, right lateral, left lateral, rotational, and isotropic) and 33 photon energy bins (0.01-20 MeV). Results and Discussion: The DCCs for major radiosensitive organs (e.g., lungs and colon) in anteroposterior geometry agreed reasonably well across the 8 Korean phantoms, whereas those for deep-seated organs (e.g., gonads) varied significantly. The DCCs of the child phantom were greater than those of the adult phantoms. A comparison with the ICRP Publication 116 data showed reasonable agreements with the Korean phantom-based data. The variations in organ DCCs were well explained using the distribution of organ depths from the phantom surface. Conclusion: A library of dose conversion coefficients for major radiosensitive organs in a series of pediatric and adult Korean voxel phantoms was established and compared with the reference data from the ICRP. This comparison showed that our Korean phantom-based data agrees reasonably with the ICRP reference data.
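
How a DCC library of this kind is applied can be sketched as follows: the organ dose for a given photon field is obtained by interpolating the coefficients over energy and folding them with the measured fluence. The energy grid and coefficient values below are placeholders, not data from this study.

```python
import numpy as np

# Placeholder DCC table for one organ and one irradiation geometry
# (absorbed dose per unit fluence), tabulated on a coarse energy grid in MeV.
energies_mev = np.array([0.01, 0.1, 1.0, 10.0, 20.0])
dcc          = np.array([0.02, 0.60, 4.50, 18.0, 30.0])   # illustrative values only

def organ_dose(fluence_per_cm2, energy_mev):
    """Interpolate the DCC (log-log) at the given energy and fold with fluence."""
    log_dcc = np.interp(np.log(energy_mev), np.log(energies_mev), np.log(dcc))
    return fluence_per_cm2 * np.exp(log_dcc)

print(organ_dose(fluence_per_cm2=1.0e6, energy_mev=0.662))  # e.g. Cs-137 photons
```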

An Investigation on Core Competencies of Data Curator (데이터 큐레이터의 핵심 직무 요건 고찰에 관한 연구)

  • Lee, You-Kyoung; Chung, EunKyung
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.26 no.3 / pp.129-150 / 2015
  • As digital technologies and the internet have advanced, data have become central to producing meaningful scientific results and to policy making in a wide variety of fields. Data curators in charge of managing data play a significant role in improving the effectiveness and efficiency of data management and re-use. The purpose of this study is to identify the core competencies of a data curator. To this end, two sets of data were collected. First, a total of 255 job descriptions were collected from web sites including ARL, Digital Curation Exchange, Code4lib, and ASIS&T JobLine for the period 2011-2014. Second, in-depth interviews were conducted with five data curators from four different organizations. The two data sets were analyzed against seven categories identified from related studies. The findings show that the core competencies for a data curator fall into four categories: communication skills, data management techniques, knowledge and strategies for data management, and instruction and service provision for users. The implications of this study include the development of integrated, professional curricula for data curators built around these core competencies.

A Methodology for Translation of Operating System Calls in Legacy Real-time Software to Ada (Legacy 실시간 소프트웨어의 운영체제 호출을 Ada로 번역하기 위한 방법론)

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2874-2890 / 1997
  • This paper describes a methodology for translating concurrent software expressed in operating system (OS) calls into Ada. Concurrency is expressed in some legacy software by OS calls that perform concurrent process/task control; examples considered in this paper are calls from C programs to Unix and calls from CMS-2 programs to the Executive Service Routines of ATES or SDEX-20. Other software re/reverse engineering research has focused on translating the OS calls in legacy software into calls to another OS; in that approach, understanding the software requires knowledge of the underlying OS, which is usually very complicated and informally documented. The research in this paper instead focuses on translating the OS calls in legacy software into equivalent protocols using Ada facilities. In the translation to Ada, these calls are represented by equivalent Ada code that follows the scheme of a message-based, kernel-oriented architecture. To facilitate translation, the methodology uses templates placed in a library for data structures, tasks, procedures, and messages. This is a new approach to modeling an OS in Ada for software re/reverse engineering: no knowledge of the underlying OS is needed to understand the software, since the dependency on the OS in the legacy software is removed, and the result is portable and interoperable across Ada run-time environments. The approach can handle OS calls in different legacy software systems.
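
The template-based translation idea can be illustrated with a small sketch: each OS call name is looked up in a library of Ada code templates and the call's arguments are substituted in. The call names and template texts below are invented examples, not the actual templates of the paper's methodology.

```python
# Illustrative template library: Unix-style call name -> Ada code fragment.
ADA_TEMPLATES = {
    "fork":   "{task_name} : {task_type};  -- spawn a concurrent Ada task",
    "kill":   "abort {task_name};",
    "msgsnd": "Kernel.Send (To => {task_name}, Msg => {message});",
    "msgrcv": "Kernel.Receive (From => {task_name}, Msg => {message});",
}

def translate_call(call_name, **args):
    """Render the Ada equivalent of one OS call, or flag it as untranslated."""
    template = ADA_TEMPLATES.get(call_name)
    if template is None:
        return f"--  TODO: no template for OS call '{call_name}'"
    return template.format(**args)

print(translate_call("msgsnd", task_name="Sensor_Task", message="Reading"))
print(translate_call("fork", task_name="Logger", task_type="Logger_Task"))
```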


A Study on Analysis of Research Data Repository in Humanities and Social Sciences (re3data를 기반으로 한 인문사회 RDR 연구)

  • Cho, Jane; Park, Jong-Do
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.30 no.2 / pp.69-87 / 2019
  • As discussions on sharing research data have spread following the inauguration of the International Open Data Charter, research support organizations in the United States, the United Kingdom, and Japan are encouraging researchers to deposit their findings in credible repositories. The humanities and social sciences, where the culture of research data sharing and the storage infrastructure are immature compared to the life and natural sciences, also need to establish and operate reliable repository infrastructure to guarantee continuous access to and utilization of data. This study analyzed the overall operational status of 305 subject repositories registered in re3data for the humanities and social sciences and clustered them by operational level using five indicators. As a result, 70% of the repositories were identified as belonging to the general cluster, while the excellent cluster (20%) contained the largest number of linguistics repositories and repositories operated in Germany. In addition, correspondence analysis confirmed that there is a relation between the sub-fields of the humanities and social sciences and the types of data archived: the history and art domains are related to images, social studies to statistical data, and linguistics to audio, plain text, and code.
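
The clustering step can be sketched with scikit-learn, assuming each repository is described by five numeric indicator scores; the indicator values below are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical scores for six repositories on five operational indicators
# (e.g. policy, persistent identifiers, API, certification, metadata quality).
scores = np.array([
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(scores)
print(labels)  # which repositories fall into the higher- vs lower-level group
```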

Improving amber suppression activity of an orthogonal pair of Saccharomyces cerevisiae tyrosyl-tRNA synthetase and a variant of E. coli initiator tRNA, fMam tRNACUA, for the efficient incorporation of unnatural amino acids (효율적인 비천연 아민노산 도입을 위한 효모균 타이로신-tRNA 합성효소와 대장균 시작 tRNA 변이체의 엠버써프레션 활성증가)

  • Tekalign, Eyob; Oh, Ju-Eon; Park, Jungchan
    • Korean Journal of Microbiology / v.54 no.4 / pp.420-427 / 2018
  • The orthogonal pair of Saccharomyces cerevisiae tyrosyl-tRNA synthetase (Sc YRS) and a variant of the E. coli initiator tRNA, fMam tRNACUA, which recognizes the amber stop codon, is an effective tool for site-specific incorporation of unnatural amino acids into proteins in E. coli. To enhance the amber suppression activity of the orthogonal pair, we generated a mutant library of Sc YRS by randomizing the two amino acids at positions 320 and 321, which are involved in recognizing the first base of the anticodon of fMam tRNACUA. Two positive clones were selected from the library by screening for chloramphenicol resistance mediated by amber suppression. They showed growth resistance against high concentrations of chloramphenicol, and their IC50 values were approximately 1.7-2.3 fold higher than that of the wild-type YRS. An in vivo amber suppression assay revealed that the mutant clone mYRS-3, containing the amino acid substitutions P320A and D321A, showed 6.5-fold higher amber suppression activity than the wild type. In addition, in vitro aminoacylation kinetics of mYRS-3 showed approximately 7-fold higher activity than the wild type, and the enhancement was mainly due to an increase in tRNA binding affinity. These results demonstrate that optimizing anticodon recognition by an engineered aminoacyl-tRNA synthetase improves the efficiency of unnatural amino acid incorporation in response to a nonsense codon.
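
The kinetic comparison in the last sentences can be made concrete with a small worked example: aminoacylation efficiency is commonly expressed as kcat/KM, so an increase in tRNA binding affinity (a lower KM) raises the efficiency even if kcat is unchanged. The parameter values below are hypothetical, chosen only to reproduce an approximately 7-fold ratio.

```python
def catalytic_efficiency(kcat_per_s, km_um):
    """kcat / KM, the usual measure of aminoacylation efficiency."""
    return kcat_per_s / km_um

# Hypothetical kinetic parameters (not measured values from the paper).
wt_eff  = catalytic_efficiency(kcat_per_s=0.5, km_um=7.0)   # wild-type YRS
mut_eff = catalytic_efficiency(kcat_per_s=0.5, km_um=1.0)   # mYRS-3: tighter tRNA binding
print(f"fold improvement ~ {mut_eff / wt_eff:.1f}")          # -> ~7.0
```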

Cataloging Trends after LRM and its Acceptance in KORMARC Bibliographic Format (LRM 이후 목록 동향과 KORMARC 통합서지용에서의 수용 방안)

  • Lee, Mihwa; Lee, Eun-Ju; Rho, Jee-Hyun
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.33 no.1 / pp.25-45 / 2022
  • This study aimed to develop the KORMARC bibliographic format reflecting cataloging trends after LRM, using a literature review, an analysis of MARC 21 discussion papers, and a comparison of the fields in MARC 21 and KORMARC. The fields and sub-fields that need to be revised in KORMARC, and the considerations for accepting them, are as follows. First, in terms of LRM/RDA, field 381 or 387 for the representative expression, field 881 and the change and addition of its sub-fields for the manifestation statement, and a data provenance code in the ▾7 sub-field may be considered. Second, in terms of Linked Data, the ▾1 sub-field for RWO (real-world object) URIs and field 758 for a related work identifier can be added. Third, for data exchange between KORMARC and BIBFRAME, KORMARC should be developed with its mapping to BIBFRAME classes and attributes in mind. Fourth, additional fields could be developed, such as 251 version information, 334 mode of issuance, 335 expansion plan, 341 accessibility content, 348 format of notated music, 353 supplementary content characteristics, 532 accessibility note, 370 associated place, 385 audience characteristics, 386 creator/contributor characteristics, 388 time period of creation, 688 subject added entry-type of entity unspecified, 884 description conversion information, and 885 matching information. This study will be used to revise the KORMARC bibliographic format and to build and utilize bibliographic data in libraries in Korea.
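
Below is a minimal sketch of how a few of the proposed additions might appear in a record, using a plain Python dictionary rather than any particular MARC library; all field values are invented examples, and only the tags follow the proposals above (881 manifestation statement, ▾7 data provenance, ▾1 RWO URI, 758 related work identifier).

```python
# Illustrative KORMARC/MARC-like record fragment with some of the proposed fields.
record_fields = [
    {"tag": "245", "subfields": {"a": "Example title /", "c": "Example Author."}},
    {"tag": "881", "subfields": {"a": "Manifestation title statement as it appears on the item"}},
    {"tag": "758", "subfields": {"i": "Related work:",
                                 "1": "http://example.org/rwo/work/123",   # ▾1: RWO URI
                                 "a": "Example related work"}},
    {"tag": "264", "subfields": {"c": "2022",
                                 "7": "statement derived from publisher data"}},  # ▾7: data provenance
]

for field in record_fields:
    subs = " ".join(f"${code} {value}" for code, value in field["subfields"].items())
    print(field["tag"], subs)
```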