• Title/Summary/Keyword: Logical Design (논리적 설계)

891 search results, processing time 0.028 seconds

Evaluation of the Measurement Uncertainty from the Standard Operating Procedures (SOP) of the National Environmental Specimen Bank (국가환경시료은행 생태계 대표시료의 채취 및 분석 표준운영절차에 대한 단계별 측정불확도 평가 연구)

  • Lee, Jongchun;Lee, Jangho;Park, Jong-Hyouk;Lee, Eugene;Shim, Kyuyoung;Kim, Taekyu;Han, Areum;Kim, Myungjin
    • Journal of Environmental Impact Assessment
    • /
    • v.24 no.6
    • /
    • pp.607-618
    • /
    • 2015
  • Five years have passed since the first set of environmental samples was taken in 2011 to represent various ecosystems, so that future generations can trace back to the past environment. Those samples have been preserved cryogenically in the National Environmental Specimen Bank (NESB) at the National Institute of Environmental Research. Even though a strict regulation (SOP, standard operating procedure) governs the whole sampling procedure to ensure that each sample represents its sampling area, the procedure has not been put to the test for validation. This question needs to be answered to clear any doubts about the representativeness and quality of the samples. To address it and verify the sampling practice set out in the SOP, the steps leading to the measurement of a sample, from sampling in the field to chemical analysis in the lab, were broken down to evaluate the uncertainty at each level. Of the 8 species currently collected for cryogenic preservation in the NESB, pine tree samples from two different sites were selected for this study. Duplicate samples were taken from each site according to the sampling protocol, followed by duplicate analyses carried out on each discrete sample. The uncertainties were evaluated by Robust ANOVA; two levels of uncertainty, one from the sampling practice and the other from the analytical process, were then combined to give the measurement uncertainty on a measured concentration of the measurand. As a result, it was confirmed that the sampling practice, not the analytical process, accounts for most of the measurement uncertainty. Based on this top-down approach to measurement uncertainty, the efficient way to ensure the representativeness of the sample was to increase the quantity of each discrete sample making up a composite sample, rather than to increase the number of discrete samples across the site. Furthermore, a cost-effective way to raise the confidence level of the measurement is to lower the sampling uncertainty, not the analytical uncertainty. To test the representativeness of a composite sample of a sampling area, the variance across the site should be less than the variance from duplicate sampling. For that, the criterion $s^2_{\mathrm{geochem}}$ (variance across the site) $< s^2_{\mathrm{samp}}$ (variance at the sampling location) was proposed. In light of this criterion, the two representative samples for the two study areas passed the requirement. In contrast, whenever the variance among the sampling locations (i.e., across the site) is larger than the sampling variance, more sampling increments need to be added within the sampling area until the requirement for representativeness is met.
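The two-level split described in the abstract can be sketched with a classical (non-robust) nested-ANOVA estimate from a duplicate design: duplicate samples per target, duplicate analyses per sample. This is a minimal illustration with invented numbers, not the study's Robust ANOVA procedure or data.

```python
# Sketch of the duplicate-design uncertainty split: classical nested ANOVA
# stand-in for Robust ANOVA; concentrations below are invented.
import math

# data[target] = [[a11, a12], [a21, a22]]: 2 duplicate samples per target,
# each analysed twice (hypothetical concentrations).
data = {
    "target1": [[10.2, 10.4], [11.0, 10.8]],
    "target2": [[9.8, 9.7], [9.1, 9.3]],
}

# Analytical variance, pooled from within-sample duplicate analyses:
# Var(a1 - a2) = 2 * s2_anal  =>  s2_anal = mean((a1 - a2)^2) / 2
pairs = [s for dup in data.values() for s in dup]
s2_anal = sum((a1 - a2) ** 2 for a1, a2 in pairs) / (2 * len(pairs))

# Sampling variance, from differences between duplicate-sample means:
# Var(m1 - m2) = 2 * s2_samp + s2_anal
diffs = [(s1[0] + s1[1]) / 2 - (s2[0] + s2[1]) / 2 for s1, s2 in data.values()]
s2_samp = max(sum(d * d for d in diffs) / (2 * len(diffs)) - s2_anal / 2, 0.0)

# Combined standard measurement uncertainty
u_meas = math.sqrt(s2_samp + s2_anal)
print(s2_anal, s2_samp, u_meas)
```

With these toy numbers the sampling term dominates, mirroring the abstract's finding that sampling, not analysis, drives the measurement uncertainty.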

The verdict category and legal decision: Focused on the role of representation of 'innocent' (평결범주와 일반인의 법적판단: '무죄표상'의 역할을 중심으로)

  • Han, Yuhwa
    • Korean Journal of Forensic Psychology
    • /
    • v.13 no.1
    • /
    • pp.1-22
    • /
    • 2022
  • This study tested the effect of the verdict category in Korean lay-participation trials on laypersons' legal decisions and the role of the representation of 'innocent' in that process. The representation of 'innocent' refers to a psychological threshold for deciding someone's innocence (no fault or sin) in a general sense. Two thresholds were compared in their function as criteria for a legal decision: 'beyond a reasonable doubt' (BRD), regarded as the standard for judgment of guilt established by law, and the individual threshold (IT), an estimate of an individual's own threshold. This study used a 2×2 complete factorial design in which the verdict category (guilty/innocent vs. guilty/not guilty) and the defendant's likelihood of guilt (low vs. high) were manipulated. Data from 137 laypeople who voluntarily participated in the online experiment were analyzed. The experimental procedure measured the 'representation of innocent' and the perceived likelihood of guilt of an accused, presented one of four trial vignettes, and obtained legal decisions (verdict confidence and estimation of the likelihood of guilt of the defendant). The verdict category did not significantly affect laypersons' legal decisions, although the guilty verdict rate in the 'guilty/innocent' condition tended to be higher than in the 'guilty/not guilty' condition. The layperson's representation of 'innocent' and the verdict category had an interaction effect on the difference between BRD and IT (threshold change) at the .1 significance level: in the 'guilty/innocent' condition, the threshold change varying with the representation of 'innocent' was larger than in the 'guilty/not guilty' condition. In comparing the functions of BRD and IT, IT significantly predicted laypersons' legal decisions at the .1 significance level by interacting with the likelihood of guilt of the defendant. Therefore, IT was the better threshold estimator. This study provides experimental evidence on the effect of the verdict category in Korean lay-participation trials, a problem often raised among lawyers, and suggests logical reasoning and empirical grounds for the psychological mechanism of the possible effect.

Geologic Map Data Model (지질도 데이터 모델)

  • Yeon, Young-Kwang;Han, Jong-Gyu;Lee, Hong-Jin;Chi, Kwang-Hoon;Ryu, Kun-Ho
    • Economic and Environmental Geology
    • /
    • v.42 no.3
    • /
    • pp.273-282
    • /
    • 2009
  • To render more valuable information, spatial databases are being constructed from digitized maps in the geosciences. Transferring file-based maps into a spatial database facilitates the integration of larger databases and information retrieval using database functions. A geological map is a surveyor's graphical interpretation of geological phenomena, which distinguishes it from other thematic maps produced quantitatively. This makes it difficult to construct geologic databases, since geologic interpretation of various meanings is needed. For those reasons, several organizations in the USA and Australia have suggested data models for database construction, but these are hard to adapt to the domestic environment because geological phenomena are represented differently. This paper suggests a data model adapted to the domestic environment, based on an analysis of 1:50,000-scale geologic maps and more detailed mine geologic maps. The suggested model is a logical data model for the ArcGIS GeoDatabase, and it can be applied efficiently to 1:50,000-scale geological maps. It is expected that the geologic data model suggested in this paper can be used for the integrated use and efficient management of geologic maps.

Comparison of Integrated Health and Welfare Service Provision Projects Centered on Medical Institutions (의료기관 중심 보건의료·복지 통합 서비스 제공 사업 비교)

  • Su-Jin Lee;Jong-Yeon Kim
    • Journal of agricultural medicine and community health
    • /
    • v.49 no.2
    • /
    • pp.132-145
    • /
    • 2024
  • Objectives: This study compares cases of Dalgubeol Health Care Project, 301 Network Project, and 3 for 1 Project based on program logic models to derive measures for promoting integrated healthcare and welfare services centered around medical institutions. Methods: From January to December 2021, information on the implementation systems and performance of each institution was collected. Data sources included prior academic research, project reports, operational guidelines, official press releases, media articles, and written surveys from project managers. A program logic model analysis framework was applied, structuring the information based on four elements: situation, input, activity, and output. Results: All three projects aimed to address the fragmentation of health and welfare services and medical blind spots. Despite similar multidisciplinary team compositions, differences existed in specific fields, recruitment scale, and employment types. Variations in funding sources led to differences in community collaboration, support methods, and future directions. There were discrepancies in the number of beneficiaries and medical treatments, with different results observed when comparing the actual number of people to input manpower and project cost per beneficiary. Conclusions: To design an integrated health and welfare service provision system centered on medical institutions, securing a stable funding mechanism and establishing an appropriate target population and service delivery system are crucial. Additionally, installing a dedicated department within the medical institution to link activities across various sectors, rather than outsourcing, is necessary. Ensuring appropriate recruitment and stable employment systems is needed. A comprehensive provision system offering services from mild to severe cases through public-private cooperation is suggested.

Modeling and Intelligent Control for Activated Sludge Process (활성슬러지 공정을 위한 모델링과 지능제어의 적용)

  • Cheon, Seong-pyo;Kim, Bongchul;Kim, Sungshin;Kim, Chang-Won;Kim, Sanghyun;Woo, Hae-Jin
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.22 no.10
    • /
    • pp.1905-1919
    • /
    • 2000
  • The main motivation of this research is to develop an intelligent control strategy for the Activated Sludge Process (ASP). The ASP is a complex, nonlinear dynamic system because of the characteristics of wastewater, changes in influent flow rate, weather conditions, and so on. The mathematical model of the ASP also includes uncertainties that are ignored or not considered by the process engineer or controller designer. The ASP is generally controlled by a PID controller with fixed proportional, integral, and derivative gains, which are adjusted by an expert with much experience with the ASP. An ASP model based on Matlab® 5.3/Simulink® 3.0 is developed in this paper. The performance of the model is tested against IWA (International Water Association) and COST (European Cooperation in the field of Scientific and Technical Research) data that include steady-state results over 14 days. The advantage of the developed model is that the user can easily modify or change the controller through the graphical user interface. The ASP model, as a typical nonlinear system, can be used to simulate and test the proposed controller for educational purposes. Various control methods are applied to the ASP model and their results compared in order to apply the proposed intelligent control strategy to a real ASP. Three control methods are designed and tested: a conventional PID controller, a fuzzy logic approach that modifies setpoints, and a fuzzy-PID control method. The proposed fuzzy-logic setpoint changer shows better performance and robustness under disturbances. An objective function can be defined and included in the proposed control strategy to improve effluent water quality and reduce operating cost in a real ASP.
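The fuzzy setpoint-changer idea above can be illustrated with a minimal sketch: an effluent-quality error is fuzzified, a small rule base maps it to a setpoint change, and the result feeds a downstream PID loop. The membership shapes, rule consequents, and numeric ranges here are illustrative assumptions, not the paper's controller.

```python
# Minimal sketch of a fuzzy setpoint changer: effluent-quality error adjusts
# the DO setpoint passed to a PID loop. All shapes and values are invented.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_setpoint_delta(error):
    """Map an effluent-quality error (mg/L) to a DO-setpoint change (mg/L)."""
    # Fuzzify the error into negative / zero / positive sets
    mu = {
        "neg": tri(error, -2.0, -1.0, 0.0),
        "zero": tri(error, -1.0, 0.0, 1.0),
        "pos": tri(error, 0.0, 1.0, 2.0),
    }
    # Rule consequents: lower / hold / raise the setpoint
    delta = {"neg": -0.5, "zero": 0.0, "pos": +0.5}
    num = sum(mu[k] * delta[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den  # weighted-average defuzzification

setpoint = 2.0                          # nominal DO setpoint, mg/L
setpoint += fuzzy_setpoint_delta(0.5)   # a half-positive error nudges it up
print(setpoint)
```

A PID controller would then track this moving setpoint, which is how a setpoint changer can improve disturbance rejection without retuning the PID gains themselves.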


Development of Rule-Set Definition for Architectural Design Code Checking based on BIM - for Act on the Promotion and Guarantee of Access for the Disabled, the Aged, and Pregnant Women to Facilities and Information - (BIM 기반의 건축법규검토를 위한 룰셋 정의서 개발 - 장애인,노인,임산부 등의 편의증진 보장에 관한 법률 대상으로 -)

  • Kim, Yuri;Lee, Sang-Hya;Park, Sang-Hyuk
    • Korean Journal of Construction Engineering and Management
    • /
    • v.13 no.6
    • /
    • pp.143-152
    • /
    • 2012
  • As the Public Procurement Service announced that BIM adoption would be compulsory for every public construction project from 2016, the importance of BIM is increasing. In addition, automatic code checking is significant for the quality control of BIM-based design. In this study, rule-sets were defined for the Act on the Promotion and Guarantee of Access for the Disabled, the Aged, and Pregnant Women to Facilities and Information. Three analytic steps were used to shortlist the objective clauses from the entire code: a frequency analysis of project reviews for architectural code compliance, a clause analysis of quantifiability, and an analysis of model-checking possibilities. The shortlisted clauses were transformed into a machine-readable rule-set definition, and a case study was conducted to verify the adaptability and consistency of the rule-set definitions. Future studies should specify the methodology for selecting objective clauses and quantify its indicators; further case studies are also needed to determine the preconditions for modeling and to check interoperability issues and other possible errors in models.
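The step of turning a shortlisted, quantifiable clause into a machine-readable rule can be sketched as data plus a generic checker. The clause limits, element fields, and rule schema below are hypothetical stand-ins, not the actual Korean accessibility code or the paper's rule-set format.

```python
# Illustrative sketch: a shortlisted clause becomes a machine-readable rule,
# then is checked against elements extracted from a BIM model. All limits
# and field names are hypothetical.

# A rule-set entry: target element type, checked property, operator, limit.
rules = [
    {"target": "Ramp", "property": "slope", "op": "<=", "limit": 1 / 12},
    {"target": "Door", "property": "clear_width_mm", "op": ">=", "limit": 800},
]

# Toy stand-in for elements extracted from a BIM model.
elements = [
    {"type": "Ramp", "id": "R1", "slope": 1 / 15},
    {"type": "Door", "id": "D1", "clear_width_mm": 750},
]

OPS = {"<=": lambda v, lim: v <= lim, ">=": lambda v, lim: v >= lim}

def check(elements, rules):
    """Return (element id, property) pairs that violate the rule-set."""
    violations = []
    for rule in rules:
        for el in elements:
            if el["type"] != rule["target"]:
                continue
            if not OPS[rule["op"]](el[rule["property"]], rule["limit"]):
                violations.append((el["id"], rule["property"]))
    return violations

print(check(elements, rules))  # the door fails the clear-width rule
```

Only clauses that survived the quantifiability analysis fit this pattern; qualitative clauses would need interpretation before they could be expressed as such rules.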

An I/O Interface Circuit Using CTR Code to Reduce Number of I/O Pins (CTR 코드를 사용한 I/O 핀 수를 감소 시킬 수 있는 인터페이스 회로)

  • Kim, Jun-Bae;Kwon, Oh-Kyong
    • Journal of the Korean Institute of Telematics and Electronics D
    • /
    • v.36D no.1
    • /
    • pp.47-56
    • /
    • 1999
  • As the density of logic gates in VLSI chips has rapidly increased, more I/O pins have been required, resulting in larger packages and higher package cost. For high-I/O-count VLSI chips, the package cost exceeds the cost of the bare chips. As gate density increases, a method of reducing the number of I/O pins for a given logic complexity is therefore needed. In this paper, we propose a novel I/O interface circuit using a CTR (Constant-Transition-Rate) code that reduces the number of I/O pins by 50%. The rising and falling edges of a CTR symbol pulse each carry 2 bits of digital data; since each symbol thus contains 4 bits, the symbol rate can be halved compared with a conventional I/O interface circuit. The simultaneous switching noise (SSN) is also reduced because the transition rate is constant and the transition points of the symbols are widely distributed. The channel encoder is implemented with logic circuits only, and the channel decoder is designed using an over-sampling method. The proper operation of the designed I/O interface circuit was verified by HSPICE simulation with 0.6 μm CMOS SPICE parameters. The simulation results indicate that the data transmission rate of the proposed circuit in 0.6 μm CMOS technology is more than 200 Mbps/pin. We also implemented the proposed circuit on Altera's FPGA and confirmed operation at a data transfer rate of 22.5 Mbps/pin.
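The 4-bits-per-symbol bookkeeping can be sketched in a few lines: 2 bits select the rising-edge timing slot and 2 bits select the falling-edge slot. The slot assignment below is an illustrative guess at the scheme, not the paper's actual encoder circuit.

```python
# Sketch of the CTR coding idea: each symbol carries 4 bits, 2 in the
# rising-edge position and 2 in the falling-edge position. The slot mapping
# is illustrative, not the paper's circuit.

def ctr_encode(nibble):
    """Split a 4-bit value into (rising-edge slot, falling-edge slot)."""
    assert 0 <= nibble <= 0xF
    rise = (nibble >> 2) & 0b11  # upper 2 bits -> rising-edge timing slot
    fall = nibble & 0b11         # lower 2 bits -> falling-edge timing slot
    return rise, fall

def ctr_decode(rise, fall):
    """Recover the 4-bit value from the two edge slots."""
    return (rise << 2) | fall

# One symbol (exactly one rising and one falling edge) replaces four
# single-bit transfers, which is why the symbol rate halves and the
# transition rate stays constant regardless of the data pattern.
for n in range(16):
    assert ctr_decode(*ctr_encode(n)) == n
print("round-trip ok")
```

Because every symbol has exactly two transitions, the transition rate is data-independent, which is the property the abstract credits for the reduced simultaneous switching noise.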


A Visual Programming Environment for Medical Image Processing (의료영상처리를 위한 시각 프로그래밍 환경)

  • Sung, Chong-Won;Kim, Jin-Ho;Kim, Jee-In
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.8
    • /
    • pp.2349-2360
    • /
    • 2000
  • In medical image processing, newly developed technologies are applied to real clinical cases, and the results are analyzed by doctors to improve those technologies. It is therefore important for doctors to have a tool that helps them apply new technologies to clinical cases and analyze the clinical results. In this paper, we design and implement a visual programming environment in which non-programming experts, such as medical doctors, can easily compose a medical image processing application program. A set of image processing functions is implemented and represented as icons, and the user selects functions by clicking the corresponding icons. Users can easily find the necessary functions in the visualized library: the user selects a function, puts the function node onto the canvas of the visual programming interface, and connects nodes to compose a dataflow diagram that shows the flow of the program. A hyperbolic tree helps visualize the set of function icons on a single screen because it provides both the whole structure of the function library and the details of the focused functions at the same time. We also developed a GUI builder in which the user interfaces of medical image processing applications are composed. Therefore, non-programming experts such as physicians can apply new medical image processing algorithms to clinical cases without performing complex computer programming procedures.
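The dataflow composition described above can be sketched abstractly: each icon corresponds to a processing function, and connecting nodes in a chain defines the order in which data flows through them. The node names and the fake image data are illustrative, not the system's actual function library.

```python
# Tiny sketch of the dataflow idea: function icons become nodes, and a
# connected chain of nodes is an image-processing pipeline. Names and data
# are illustrative stand-ins.

def load(_):         return [0, 128, 255, 64]                     # fake image source
def invert(img):     return [255 - p for p in img]                # pixel inversion
def threshold(img):  return [255 if p > 127 else 0 for p in img]  # binarization

def run_pipeline(nodes, data=None):
    """Execute a connected chain of nodes in order, feeding each output onward."""
    for node in nodes:
        data = node(data)
    return data

# The user's diagram load -> invert -> threshold, as an ordered chain.
pipeline = [load, invert, threshold]
print(run_pipeline(pipeline))
```

A real dataflow editor generalizes this to a directed graph with branching and merging, but the execution principle, outputs flowing along the drawn connections, is the same.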


A Study on the Pixel-Parallel Image Processing System for Image Smoothing (영상 평활화를 위한 화소-병렬 영상처리 시스템에 관한 연구)

  • Kim, Hyun-Gi;Yi, Cheon-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.39 no.11
    • /
    • pp.24-32
    • /
    • 2002
  • In this paper we implemented various image processing filters using the format converter. The design method is based on realizing a large processor-per-pixel array through integrated circuit technology. The two types of integrated structure can be classified as associative parallel processors and parallel-process DRAM (or SRAM) cells. The layout pitch of the one-bit-wide logic matches the memory cell pitch, allowing high-density PE arrays in the integrated structure. This format converter design implements the control path efficiently and can exploit the technology without complicated controller hardware. Sequences of array instructions are generated by the host computer before processing starts and are saved in the unit controller; after processing starts, the host computer executes the pixel-parallel operations from the saved instructions. As a result, we obtained three findings: 1) simple smoothing suppresses higher spatial frequencies, reducing noise but also blurring edges; 2) a combined smoothing-and-segmentation process reduces noise while preserving sharp edges; and 3) median filtering, like smoothing and segmentation, can be applied to reduce image noise, eliminating spikes while maintaining sharp edges and preserving monotonic variations in pixel values.
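The median-filtering behavior in finding 3) is easy to demonstrate on toy data: an isolated spike disappears while a step edge survives. This pure-Python sketch of a 3×3 median filter is illustrative only; the paper implements such operators on pixel-parallel hardware.

```python
# Minimal 3x3 median filter sketch: spikes are removed, step edges survive.
# Pure Python on small lists, for illustration only.

def median3x3(img):
    """Apply a 3x3 median filter to a 2-D list of pixel values (interior only)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 window values
    return out

# An isolated spike in a flat region is eliminated...
flat = [[10] * 5 for _ in range(5)]
flat[2][2] = 255
assert median3x3(flat)[2][2] == 10

# ...while a vertical step edge is preserved on both sides.
edge = [[0, 0, 0, 100, 100] for _ in range(5)]
assert median3x3(edge)[2][2] == 0 and median3x3(edge)[2][3] == 100
print("spike removed, edge preserved")
```

A linear smoothing kernel would instead average the spike into its neighbors and blur the edge, which is exactly the contrast drawn in findings 1) and 3).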

Double Encryption of Digital Hologram Based on Phase-Shifting Digital Holography and Digital Watermarking (위상 천이 디지털 홀로그래피 및 디지털 워터마킹 기반 디지털 홀로그램의 이중 암호화)

  • Kim, Cheol-Su
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.4
    • /
    • pp.1-9
    • /
    • 2017
  • In this paper, a double encryption technique based on phase-shifting digital holography and digital watermarking is proposed. First, a logo image to be used as the digital watermark is set, and a binary phase computer-generated hologram (CGH) for this logo image is designed using an iterative algorithm. A randomly generated binary phase mask is set as the watermark, and a key image is obtained through an XOR operation between the binary phase CGH and the random binary phase mask. The object image is phase-modulated to a constant amplitude and multiplied by the binary phase mask to generate the object wave. This object wave can be regarded as a first encrypted image with a noise-like pattern that includes the watermark information. Finally, the first encrypted image is interfered with a reference wave using 2-step PSDH, giving a clearly visible interference pattern called the second encrypted image. The decryption process proceeds by Fresnel transform and the inverse of the first encryption process after appropriate arithmetic operations on the two encrypted images. The proposed encryption and decryption processes are confirmed through computer simulations.
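The key-image step above relies on XOR being its own inverse: XORing the binary phase CGH with a random binary mask hides it, and XORing again with the same mask recovers it. This sketch shows only that bookkeeping; a real binary-phase CGH would come from an iterative design algorithm, so a random bit array stands in for it here.

```python
# Sketch of the first encryption stage: key = CGH XOR random mask, with
# binary phase values 0/pi represented as bits 0/1. The "CGH" here is just
# random bits standing in for an iteratively designed hologram.
import random

random.seed(1)
N = 8  # toy hologram size

cgh = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]   # stand-in CGH
mask = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]  # random phase mask

# Key image: elementwise XOR of the CGH and the mask
key = [[cgh[y][x] ^ mask[y][x] for x in range(N)] for y in range(N)]

# Whoever holds both the key and the mask can recover the CGH, since
# (cgh ^ mask) ^ mask == cgh  (XOR is its own inverse).
recovered = [[key[y][x] ^ mask[y][x] for x in range(N)] for y in range(N)]
assert recovered == cgh
print("CGH recovered from key + mask")
```

Without the mask, the key image alone looks like noise, which is what makes the watermark information inaccessible to anyone holding only one of the two pieces.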