• Title/Summary/Keyword: 키관리 (Key Management)

Search Results: 1,064

Methods of Incorporating Design for Production Considerations into Concept Design Investigations (개념설계 단계에서 총 건조비를 최소로 하는 생산지향적 설계 적용 방법)

  • Bong, H.S.
    • Bulletin of the Society of Naval Architects of Korea
    • /
    • v.27 no.3
    • /
    • pp.131-136
    • /
    • 1990
  • One of the important outcomes of the long-standing effort to record and refine data on shipbuilding production performance and productivity is that production information can now be provided in a concise and reliable form that can be fully exploited during the various design review stages of a ship. Such data can include the work content at each stage of the production plan and estimates of material and labour costs; and if the overall design methodology for ships and offshore structures is to be improved, it must draw together broad knowledge of the build strategy, purchasing policy, and production technology that underpin 'Design for Production'. Recently, the introduction of management, design, and production support systems, such as those found within CIMS, has made it possible to drive this kind of design process. In parallel, advances in computing support for design, in particular interactive graphics, have greatly enhanced the designer's ability to vary the hull form and structural arrangement and see the result immediately. The ability to generate alternative design arrangements quickly and evaluate them at once in the early design stage is clearly a major advantage, and being able to consider production-related factors at that stage is an even more significant advance. Examining design alternatives that accurately reflect the production methods and associated production costs within a short time, and estimating and comparing their production costs, makes it possible at the contract stage to select the optimum design that minimises the total production cost on the basis of realistic production methods and reliable production records. Providing such a new design tool now allows production-related information, knowledge, and performance data to be reflected in early design. This paper presents the results of research at the University of Newcastle upon Tyne in which a new ship structural design method incorporating the features described above was developed. The design study comprises five stages: (1) definition of the structural geometry and calculation/determination of scantlings using computer graphics linked to a production information database; (2) provision of information on production technology and build methods to fix the block division and panel arrangement; (3) using (1) and (2), work content assessment at each production stage, namely (a) preparation, (b) fabrication/assembly, and (c) erection; (4) shipyard facility and unit-rate information for estimating the material cost, labour cost, and overhead cost of each design alternative; and (5) calculation of the total production cost and comparative evaluation of the design alternatives. The method was applied to the design of a bulk carrier, and sensitivity studies were carried out on the effects of changes in structural geometry, levels of standardisation, and structural topology. To give the designer easy interactive access, the system uses the graphics facilities of a VAX to present the structural geometry, work content analysis, and production cost status of each design alternative. In conclusion, this work is believed to be the first attempt to couple a detailed production cost model with interactive graphics at the early design stage, allowing alternative designs to be generated and compared rapidly and production performance data to be reflected in early design, and it is hoped that it will contribute to the development of optimal Design for Production. The results of applying the system are summarised in the appendix; for details, see references [4] or [7].
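
Stages (4) and (5) amount to rolling material, labour, and overhead costs up into a single total production cost per design alternative and ranking the alternatives. The sketch below illustrates that roll-up only in outline; the data structure, unit rates, stage names, and figures are hypothetical and are not taken from the paper or its references.

```python
from dataclasses import dataclass

# Hypothetical sketch of the stage (4)-(5) cost roll-up: each design alternative
# carries its steel weight and work content (man-hours) per production stage, and
# the total production cost is material + labour + overhead. All rates and
# figures below are illustrative assumptions, not values from the paper.

LABOUR_RATE = 25.0        # assumed cost per man-hour
OVERHEAD_FACTOR = 0.60    # assumed overhead as a fraction of labour cost
STEEL_PRICE = 450.0       # assumed cost per tonne of structural steel


@dataclass
class DesignAlternative:
    name: str
    steel_weight_t: float       # tonnes of structural steel
    work_content_mh: dict       # man-hours per production stage

    def total_production_cost(self) -> float:
        material = self.steel_weight_t * STEEL_PRICE
        labour = sum(self.work_content_mh.values()) * LABOUR_RATE
        overhead = labour * OVERHEAD_FACTOR
        return material + labour + overhead


alternatives = [
    DesignAlternative("A: coarse block division", 3200,
                      {"preparation": 9000, "fabrication/assembly": 42000, "erection": 15000}),
    DesignAlternative("B: finer block division", 3050,
                      {"preparation": 8500, "fabrication/assembly": 45000, "erection": 13500}),
]

# Rank the alternatives by total production cost, as in stage (5).
for alt in sorted(alternatives, key=DesignAlternative.total_production_cost):
    print(f"{alt.name}: {alt.total_production_cost():,.0f}")
```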

Transition of Rice Culture Practices during Chosun Dynasty through Old References V. Cultivation and Cropping Patterns (주요 고농서를 통한 조선시대의 도작기술 전개 과정 연구 V. 재배양식)

  • Lee, Sung-Kyum;Guh, Ja-Ok;Lee, Eun-Woong;Lee, Hong-Suk
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.37 no.1
    • /
    • pp.104-115
    • /
    • 1992
  • The rice culture techniques included 'Jodosukyungbeob' (旱稻水耕法 : culture techniques of early-ripening paddy rice), 'Mandosukyungbeob' (晩稻水耕法 : culture techniques of late-ripening paddy rice), 'Handobeob' (旱稻<山稻>法 : culture techniques of upland rice), 'Myojongbeob' (苗種法 : culture techniques of paddy rice by transplanting), 'Kunangbeob' (乾秧法 : culture techniques of rice by transplanting, in which seedlings are raised in a dry paddy), and 'Sudogunpanongbeob' (水稻乾播農法 : culture techniques of paddy rice seeded in a dry field). Notably, 'Kunangbeob' and 'Sudogunpanongbeob' were originally developed in Korea, as seen in the 1600s (Kyoungje : 經濟) and the early 1800s (Yoji : 要旨). In 'Jodosukyungbeob', seed dipping, water-sprouting, and protection against bird damage took 9 days in China, 3 days each, whereas in Korea seed dipping in water took 3 days and the remaining procedures were established flexibly. In matured soils, the practices were fall plowing immediately after harvest, recognition of effective tillering with additional fertilization using human manure, and stimulation of sprouting by lime application. Unique culture techniques suited to Korean conditions were practiced, including weed control after draining, carried out accurately 3 to 4 times, mid-season draining to improve wind and drought tolerance, harvesting at the appropriate time to prevent grain shattering, and seeding in rows. 'Mandosukyungbeob' consisted of techniques improved in contrast to those of China, the major elements being selection of proper varieties, good stand establishment by seeding at high rates, induction of vigorous tillers, and adoption of 'Jokjongbeob' (足種法 : seeding method by foot). One of the most prominent rice culture methods of our ancestors was 'Kunpanongbeob', which was systematized from the habitual practice of Pyongan Province. Another unique technique was the 'Hando' [旱稻(山稻)] culture technique, which combined 'Jokjongbeob', a root stimulation method, and disaster-tolerant mixed cropping with the adoption of variety theory, although it originated from China. The transplanting technique had appeared before 'Jikseol' (「直說」) and its merits were well recognized. However, the method was in principle prohibited from the early Chosun dynasty because an extremely bad harvest was expected under drought and insufficient water storage. It was permitted, however, in areas that held water at all times and especially for large-scale farming. By the end of the Chosun dynasty most rice was transplanted, because transplanting spread continuously through the three southern provinces of Korea. Under these circumstances, the transplanting technique was improved from the early to the late Chosun dynasty through weed control, fertilizing, water management, and quadratic transplanting. Based on these techniques, agricultural productivity had improved five-fold by that time. 'Kunpanongbeob' was created and developed to suit Korean conditions, which are dry in the early season and flooded in the late season, and was subsequently developed and established as a nursery-seedling transplanting technique.

A Study of Decrease Exposure Dose for the Radiotechnologist in PET/CT (PET-CT 검사에서 방사선 종사자 피폭선량 저감에 대한 방안 연구)

  • Kim, Bit-Na;Cho, Suk Won;Lee, Juyoung;Lyu, Kwang Yeul;Park, Hoon-Hee
    • Journal of radiological science and technology
    • /
    • v.38 no.1
    • /
    • pp.23-30
    • /
    • 2015
  • Positron emission tomography has become an increasingly important diagnostic modality in medical imaging. Compared with 99mTc, which emits 140 keV gamma rays, positron-emitting radionuclides emit 511 keV annihilation photons, so radiological technologists need to reduce their exposure to the radiation emitted from patients. We measured the external dose rates at varying distances from patients, measured them again with a shield in place, and analyzed the external dose distribution in order to help manage the radiation protection of technologists. Ten patients were examined (mean age: $47.7{\pm}6.6$, mean height: $165.5{\pm}3.8cm$, mean weight: $65.9{\pm}1.4kg$). Dose rates were measured at the head, chest, abdomen, knees, and toes at distances of 10, 50, 100, 150, and 200 cm, respectively. The procedure was then repeated with a portable radiation shield at the head, chest, and abdomen at 100, 150, and 200 cm, and the transmittance was calculated. At 10 cm, the head ($105.40{\mu}Sv/h$) showed the highest and the foot ($15.85{\mu}Sv/h$) the lowest dose rate; at 200 cm, the head, chest, and abdomen showed similar values. At the head, the measured dose rates were $9.56{\mu}Sv/h$, $5.23{\mu}Sv/h$, and $3.40{\mu}Sv/h$ at 100, 150, and 200 cm, respectively; with the shield they were $2.24{\mu}Sv/h$, $1.67{\mu}Sv/h$, and $1.27{\mu}Sv/h$. At the chest, the dose rates were $8.54{\mu}Sv/h$, $4.90{\mu}Sv/h$, and $3.44{\mu}Sv/h$ at 100, 150, and 200 cm; with the shield they were $2.27{\mu}Sv/h$, $1.34{\mu}Sv/h$, and $1.13{\mu}Sv/h$. At the abdomen, the dose rates were $9.83{\mu}Sv/h$, $5.15{\mu}Sv/h$, and $3.18{\mu}Sv/h$ at 100, 150, and 200 cm; with the shield they were $2.60{\mu}Sv/h$, $1.75{\mu}Sv/h$, and $1.23{\mu}Sv/h$. Transmittance increased as the distance increased. The radiation dose decreased with distance, and with the shield it was reduced to roughly one-fourth of the unshielded value. Radiological technologists are exposed to radiation, and there are practical limits to how much further they can stay from patients; therefore, proper shielding can decrease the radiation dose to the technologists.
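
As a small numerical illustration of the two quantities discussed above, the sketch below computes the shield transmittance (shielded dose rate divided by unshielded dose rate) from the head measurements quoted in the abstract; only the numbers come from the abstract, and the variable and function names are our own.

```python
# Transmittance of the portable shield at the head position, using the dose
# rates quoted in the abstract (uSv/h). Names are illustrative.

distances_cm = [100, 150, 200]
head_unshielded = [9.56, 5.23, 3.40]   # uSv/h without the shield
head_shielded = [2.24, 1.67, 1.27]     # uSv/h with the portable shield


def transmittance(shielded: float, unshielded: float) -> float:
    """Fraction of the dose rate that passes through the shield."""
    return shielded / unshielded


for d, u, s in zip(distances_cm, head_unshielded, head_shielded):
    t = transmittance(s, u)
    print(f"{d} cm: transmittance = {t:.2f} (dose reduced {1 / t:.1f}x)")

# Consistent with the abstract: transmittance rises with distance
# (about 0.23 at 100 cm to 0.37 at 200 cm), and at 100 cm the shielded dose
# is roughly one-fourth of the unshielded value.
```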

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes easily when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insertion performance evaluation of MongoDB for various chunk sizes.
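
The abstract's argument for a document store over a fixed relational schema comes down to being able to insert heterogeneous log records into the same collection without predefined columns, and to summarise them per type for the log graph generator module. The fragment below is a minimal sketch of that idea with the PyMongo driver; the database, collection, and field names and the sample records are invented for illustration and do not come from the paper.

```python
from pymongo import MongoClient

# Minimal sketch of schema-free log storage in MongoDB. Two log records with
# different shapes go into the same collection with no schema migration.
# All names and sample values below are illustrative assumptions.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["client_business"]

logs.insert_many([
    {   # a transaction-style log record
        "type": "transfer",
        "branch": "seoul-001",
        "amount": 250000,
        "timestamp": "2013-06-01T09:12:33",
    },
    {   # a web-access log record with entirely different fields
        "type": "web_access",
        "url": "/account/balance",
        "status": 200,
        "latency_ms": 48,
        "timestamp": "2013-06-01T09:12:35",
    },
])

# Per-type counts, the kind of aggregated summary the log graph generator
# module would plot for the user.
for row in logs.aggregate([{"$group": {"_id": "$type", "count": {"$sum": 1}}}]):
    print(row)
```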