• Title/Summary/Keyword: Standard field size

A Characteristic Study on Shear Strength of Reinforced Concrete Beams according to Longitudinal Reinforcement Ratio and Size Effect (철근콘크리트보의 인장철근비와 크기효과에 의한 전단강도 특성 연구)

  • Yu, In-Geun;Noh, Hyung-Jin;Lee, Ho-Kyung;Baek, Seung-Min;Kim, Woo-Suk;Kwak, Yoon-Keun
    • Journal of the Architectural Institute of Korea Structure & Construction / v.36 no.2 / pp.117-126 / 2020
  • The main objective of this experimental study is to investigate the shear strength of reinforced concrete beams according to the longitudinal reinforcement ratio (ρ) and size effect. To examine shear strength as a function of the tensile reinforcement ratio, the main variables were set at 100%, 75%, and 50% of ρ=0.01, a ratio widely used in the construction field. A total of twelve RC beams were tested under 4-point loading. The theoretical values from the KBC and ACI equations, in addition to existing proposed equations, are compared with the experimental data; through this analysis, the study aims to provide more reasonable equations for the shear design of reinforced concrete beams. When the nine specimens with shear reinforcement spacing fixed at d/s=2.0 (R*-1, R*-2, and R*-3 series) are compared with the three R*-4 series specimens fixed at d/s=1.5, the shear strengths of the two groups are similar. As a result, the current d/s=2.0 standard for shear reinforcement spacing may be somewhat relaxed.
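
For context on the code-equation comparison above, the sketch below evaluates the simplified ACI 318 nominal shear strength Vn = Vc + Vs in SI units (Vc = 0.17·λ·√f'c·bw·d, Vs = Av·fyt·d/s). It is a minimal illustration only: the specimen dimensions, material strengths, and stirrup layout are hypothetical, not values from the paper.

```python
import math

def aci_shear_strength(fc, bw, d, Av=0.0, fyt=0.0, s=None, lam=1.0):
    """Nominal shear strength Vn = Vc + Vs (simplified ACI 318, SI units).

    fc  : concrete compressive strength (MPa)
    bw  : web width (mm)
    d   : effective depth (mm)
    Av  : area of shear reinforcement within spacing s (mm^2)
    fyt : yield strength of shear reinforcement (MPa)
    s   : stirrup spacing (mm); None for a beam without stirrups
    lam : lightweight-concrete factor (1.0 for normal-weight concrete)
    Returns Vn in kN.
    """
    Vc = 0.17 * lam * math.sqrt(fc) * bw * d   # concrete contribution (N)
    Vs = Av * fyt * d / s if s else 0.0        # stirrup contribution (N)
    return (Vc + Vs) / 1e3                     # N -> kN

# Hypothetical specimen: f'c = 24 MPa, bw = 300 mm, d = 450 mm,
# two-leg D10 stirrups (Av ~ 142 mm^2, fyt = 400 MPa) at d/s = 2.0 -> s = 225 mm
print(aci_shear_strength(24, 300, 450, Av=142, fyt=400, s=225))
```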

Effect of Soil Sample Pretreatment Methods on Total Heavy Metal Concentration (토양 시료조제 방법이 총중금속 농도에 미치는 영향)

  • Kim, Jung-Eun;Ji, Won Hyun
    • Journal of Soil and Groundwater Environment / v.27 no.4 / pp.63-74 / 2022
  • In analyzing heavy metals in soil samples, the standard protocol established by the Korean Ministry of Environment (KSTM) requires two different pretreatments (A and B) based on soil particle size. Soil particles < 0.15 mm in diameter after sieving are processed directly by acid extraction (method A). However, if the quantity of particles < 0.15 mm is not sufficient, grinding of the particles in the 0.15 mm to 2 mm range is required (method B). Grinding is often needed for field samples, especially soil retrieved from a soil washing process, which contains relatively large soil grains. In this study, two soil samples with different particle size distributions were prepared and analyzed for heavy metal concentrations using the two pretreatments to investigate the effect of grinding. The results showed that heavy metal concentrations tend to increase as the fraction of small particles increases. Comparing the two pretreatments, pretreatment A yielded higher heavy metal concentrations than pretreatment B, indicating a significant influence of grinding on the analytical results. These results suggest that analytical values for heavy metals in soil samples obtained by KSTM should be taken with caution and carefully reviewed.
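
As a purely arithmetic illustration of why a larger fine-particle fraction raises the measured bulk concentration, the sketch below computes a mass-weighted average over two sieve fractions; the fraction masses and per-fraction concentrations are hypothetical, not data from the study.

```python
# A minimal sketch: mass-weighted bulk concentration from sieve fractions.
# The masses (g) and per-fraction concentrations (mg/kg) below are hypothetical.
fractions = {
    "< 0.15 mm":      {"mass": 40.0, "conc": 350.0},  # fines typically carry more metal
    "0.15 mm - 2 mm": {"mass": 60.0, "conc": 120.0},
}

total_mass = sum(f["mass"] for f in fractions.values())
bulk_conc = sum(f["mass"] * f["conc"] for f in fractions.values()) / total_mass
print(f"bulk concentration: {bulk_conc:.1f} mg/kg")  # 212.0 mg/kg for these numbers
```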

Clinical Application of Wedge Factor (Wedge Factor의 임상적 응용)

  • Choi Dong-Rak;Ahn Yong-Chan;Huh Seung Jae
    • Radiation Oncology Journal / v.13 no.3 / pp.291-296 / 1995
  • Purpose : In general, the wedge factors used in clinical practice ignore the dependence on field size and depth. In the present work, we systematically investigated the depth and field-size dependence in order to determine the absorbed dose more accurately. Methods : The wedge factors for each wedge filter were measured at various depths (depth of Dmax, 5 cm, 10 cm, and 15 cm) and field sizes (5 cm × 5 cm, 10 cm × 10 cm, 15 cm × 15 cm, and 20 cm × 20 cm) using 4-, 6-, and 10-MV X-rays. By convention, the wedge factor is the ratio of the central-axis ionization reading with the wedge filter in place to that of the open field at the same field size and measurement depth. The wedge factors for 4-, 6-, and 10-MV X-rays were determined on Clinac 600C and 2100C linear accelerators (Varian Associates, Inc., Palo Alto, CA). To confirm that the wedge was centered, measurements were made with the two possible wedge positions and various collimator orientations. Results : The standard deviations of the measured values are within 0.3%, and the depth dependence of the wedge factor is greater at lower energies. In particular, the variation of the wedge factor is no less than 5% for 4- and 6-MV X-rays with wedge filters of 45° or more, whereas the dependence on field size appears small. Conclusion : The results show a dependence on the point of measurement and a small dependence on field size, so both depth and field size should be considered when determining wedge factors. If a single wedge factor is to be used for each wedge filter, measurement for a 10 cm × 10 cm field at a depth of 10 cm seems a reasonable choice.
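
The wedge-factor definition quoted above reduces to a simple ratio; the sketch below tabulates it for a couple of (field size, depth) combinations. The ionization readings are hypothetical, not the published measurements.

```python
# A minimal sketch: wedge factor = central-axis reading with wedge / open-field
# reading at the same field size and depth. The readings below are hypothetical.
def wedge_factor(reading_wedge: float, reading_open: float) -> float:
    return reading_wedge / reading_open

# Hypothetical 6 MV, 45-degree wedge readings (nC) indexed by (field size, depth).
readings = {
    ("10x10 cm", "5 cm"):  {"wedge": 6.10, "open": 12.45},
    ("10x10 cm", "10 cm"): {"wedge": 4.62, "open": 9.21},
}

for key, r in readings.items():
    print(key, round(wedge_factor(r["wedge"], r["open"]), 3))
```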

The Development of Probabilistic Time and Cost Data: Focus on field conditions and labor productivity

  • Hyun, Chang-Taek;Hong, Tae-Hoon;Ji, Soung-Min;Yu, Jun-Hyeok;An, Soo-Bae
    • Journal of Construction Engineering and Project Management / v.1 no.1 / pp.37-43 / 2011
  • Labor productivity is a significant factor in controlling time, cost, and quality. Many researchers have developed models that define methods of measuring the relationship between productivity and various parameters such as the size of the working area, maximum working hours, and crew composition. Most previous research has focused on estimating productivity; this research, however, concentrates on estimating labor productivity and developing time and cost data for a repetitive concrete pouring activity. In Korea, the "Standard Estimating" data contain only average productivity for the construction industry, so it is difficult to predict the time and cost of any particular project; as a result, errors occur in estimating duration and cost for individual activities or projects. To address these issues, this research collected data, measured productivity, and developed time and cost data using labor productivity based on field conditions. A probabilistic approach is also proposed for developing the data. A case study using actual data collected from construction sites is performed to validate the process, and the result could be used as the EVMS baseline for cost and schedule management.
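
One plausible way to turn field productivity records into probabilistic time and cost data is a Monte Carlo pass over an assumed productivity distribution; the sketch below is such a pass, not the authors' model, and the lognormal parameters, crew size, quantity, and wage rate are all hypothetical.

```python
# A minimal Monte Carlo sketch (not the authors' model): sample labor productivity,
# then derive activity duration and labor cost distributions. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

quantity = 480.0          # m^3 of concrete to pour (hypothetical)
crew_size = 6             # laborers per crew
wage_per_hour = 25_000.0  # KRW per labor-hour (hypothetical)

# Assumed productivity: lognormal, centered around 1.1 m^3 per labor-hour.
productivity = rng.lognormal(mean=np.log(1.1), sigma=0.20, size=10_000)

labor_hours = quantity / productivity              # total labor-hours per scenario
duration_days = labor_hours / (crew_size * 8.0)    # 8-hour working days
cost = labor_hours * wage_per_hour

for name, x in [("duration (days)", duration_days), ("labor cost (KRW)", cost)]:
    print(name, f"P50={np.percentile(x, 50):,.0f}", f"P90={np.percentile(x, 90):,.0f}")
```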

PROBABILISTIC MODEL-BASED APPROACH FOR TIME AND COST DATA : REGARDING FIELD CONDITIONS AND LABOR PRODUCTIVITY

  • ChangTaek Hyun;TaeHoon Hong;SoungMin Ji;JunHyeok Yu;SooBae An
    • International conference on construction engineering and project management / 2011.02a / pp.256-261 / 2011
  • Labor productivity is a significant factor in controlling time, cost, and quality. Many researchers have developed models that define methods of measuring the relationship between productivity and various constraints such as the size of the working area, maximum working hours, and crew composition. Most previous research has focused on estimating productivity; this research, however, concentrates on estimating labor productivity and developing time and cost data for a repetitive concrete pouring activity. In Korea, the "Standard Estimating" data contain only average productivity for the construction industry, and it is difficult to predict the time and cost of any particular project; hence, there are errors in estimating duration and cost for individual activities and projects. To address these issues, this research collects data, measures productivity, and develops time and cost data using labor productivity based on field conditions. A probabilistic approach is also proposed for developing the data. A case study is performed to validate this process using actual data collected from construction sites, and the result could be used as the EVMS baseline for cost and schedule management.

A Study on Optimum Tree Planting Density for Apartment Complex (아파트단지 조경수 적정식재밀도 연구)

  • Oh, Choong-Hyeon;Jeong, Wook-Ju;Lee, Im-Kyu;Kim, Min-Kyung;Park, Eun-Ha
    • Journal of the Korean Institute of Landscape Architecture / v.40 no.6 / pp.140-147 / 2012
  • This study was conducted to investigate the optimum planting density for apartment complexes and to check the validity of the Landscape Architecture Criteria of Korea. Field data were compared with the Landscape Architecture Criteria, with the tree density of urban forest regarded as the standard. The field survey covered three apartment complexes in the capital area, all completed within the last ten years; ten sites were selected in each complex and the tree density per unit area was calculated. The field data were divided into standard-size trees and large-size trees, the latter receiving a weighting factor, and the two were compared and analyzed. The crown projected area (CPA) was also calculated, considering proper growth of low vegetation and sufficient shade. The outcome shows that the minimum tree size in the Landscape Architecture Criteria is rational; however, when weighted large-size trees are planted, the tree density falls short of the urban-forest density and the CPA is less than 50%. According to the field survey of the three complexes, the tree density of the apartment complexes satisfied or exceeded the Landscape Architecture Criteria, but where large-size trees were planted, tree density and CPA were high because of additional planting to compensate for a deficient landscape. Therefore, revision of the Landscape Architecture Criteria is required, such as deleting or minimizing the weighting clause for large-size trees and limiting the CPA to not less than 50% and not more than 100%.
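
The crown projected area (CPA) ratio that the authors propose bounding between 50% and 100% is simply the summed crown projections divided by the planted site area; the sketch below shows the arithmetic with hypothetical crown widths and site area.

```python
# A minimal sketch: crown projected area (CPA) ratio of a planting site.
# Crown widths (m) and site area (m^2) below are hypothetical.
import math

crown_widths = [4.5, 3.0, 3.0, 2.5, 2.5, 2.0, 2.0, 1.5]  # per tree
site_area = 120.0

cpa = sum(math.pi * (w / 2.0) ** 2 for w in crown_widths)  # circular crown projections
ratio = 100.0 * cpa / site_area
print(f"CPA = {cpa:.1f} m^2, {ratio:.0f}% of site area")
```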

Medical Image Compression Using JPEG International Standard (JPEG 표준안을 이용한 의료 영상 압축)

  • Ahn, Chang-Beom;Han, Sang-Woo;Kim, Il-Yoen
    • Proceedings of the KIEE Conference / 1993.07a / pp.504-506 / 1993
  • The Joint Photographic Experts Group (JPEG) standard was proposed by the International Organization for Standardization (ISO/SC 29/WG 10) and CCITT SG VIII as an international standard for digital continuous-tone still image compression. The JPEG standard has been widely accepted in electronic imaging, computer graphics, and multimedia applications; however, because of the lossy character of JPEG compression, its application in medical imaging has been limited. In this paper, the JPEG standard was applied to a series of head-section magnetic resonance (MR) images (256 gray levels, 256 × 256 size) and its performance was investigated. The DCT-based sequential mode of the JPEG standard was implemented with the CL550 compression chip, while progressive and lossless coding were implemented in software without additional hardware. The experiment shows that a compression ratio of about 10 to 20 was obtained for the MR images without noticeable distortion. The error signal between the image reconstructed by JPEG and the original image was nearly random noise, without any pattern-related artifacts. Although the coding efficiency of progressive and hierarchical coding is identical to that of sequential coding in compression ratio and SNR, it offers useful features for fast retrieval of patient images from a large image database and for remote diagnosis over slow public communication channels.
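
A rough software equivalent of the experiment, assuming Pillow's baseline JPEG codec rather than the authors' CL550 hardware, is sketched below: it compresses a grayscale image, reports the compression ratio, and summarizes the reconstruction error. The input image here is synthetic, standing in for an MR slice.

```python
# A minimal sketch (not the authors' CL550-based setup): JPEG-compress a
# grayscale image with Pillow, then report the compression ratio and the
# statistics of the reconstruction error.
import io
import numpy as np
from PIL import Image

def jpeg_compress_stats(gray_image: np.ndarray, quality: int = 75):
    """gray_image: 2-D uint8 array (e.g., a 256x256 slice)."""
    src = Image.fromarray(gray_image, mode="L")
    buf = io.BytesIO()
    src.save(buf, format="JPEG", quality=quality)      # lossy DCT-based coding
    buf.seek(0)
    recon = np.asarray(Image.open(buf), dtype=np.int16)

    ratio = gray_image.size / buf.getbuffer().nbytes   # raw bytes / compressed bytes
    err = recon - gray_image.astype(np.int16)          # reconstruction error image
    return ratio, err.mean(), err.std()

# Hypothetical input: a random 256x256 8-bit image stands in for an MR slice.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(jpeg_compress_stats(img, quality=50))
```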

The characteristics on dose distribution of a large field (넓은 광자선 조사면(40 × 40 cm² 이상)의 선량분포 특성)

  • Lee Sang Rok;Jeong Deok Yang;Lee Byoung Koo;Kwon Young Ho
    • The Journal of Korean Society for Radiation Therapy / v.15 no.1 / pp.19-27 / 2003
  • I. Purpose: In special cases such as total body irradiation (TBI), half body irradiation (HBI), non-Hodgkin's lymphoma, Ewing's sarcoma, lymphosarcoma, and neuroblastoma, a large field can be used clinically. In practice, the dose distribution of a large field is usually derived, and calibrated, from measurements made for small fields (standard SSD of 100 cm, field sizes up to 40 × 40 cm²). With simple calculation alone, however, it is difficult to know the dose and its uniformity in the actual body region because of the various scatter contributions. II. Method & Materials: Using a Multidata water phantom at the standard SSD of 100 cm, the basic parameters (PDD, TMR, output, Sc, Sp) were measured as a function of field size. The same parameters were then measured for increasing field sizes at SSD 180 cm (phantom placed on the floor, with a vertical beam) and at SSD 350 cm (phantom against the wall, using a small water phantom with a Mylar window that allows horizontal-beam measurement), and the results were compared. III. Results & Conclusion: Compared with the standard dose data, the parameters measured at SSD 180 cm and 350 cm showed little difference, and the deviations did not exceed the experimental error. To obtain accurate data, dose measurements in an anthropomorphic phantom, or absolute dose measurements using a specially devised full-scatter phantom, are required. In addition, the use of a small-volume ionization chamber and the stem effect of the cable must be considered for a large field.
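
A common textbook cross-check for such extended-SSD data, though not necessarily the authors' analysis, is the Mayneord F factor for converting percentage depth dose between SSDs, together with an inverse-square scaling of the output; the sketch below applies both with hypothetical 6-MV beam values.

```python
# A minimal sketch (a textbook approximation, not necessarily the authors' analysis):
# convert percentage depth dose and output from the standard SSD to an extended SSD.
def mayneord_f(ssd1: float, ssd2: float, d: float, dmax: float) -> float:
    """Mayneord F factor for converting PDD from SSD1 to SSD2 (all distances in cm)."""
    return ((ssd2 + dmax) / (ssd1 + dmax)) ** 2 * ((ssd1 + d) / (ssd2 + d)) ** 2

def inverse_square_output(output_ref: float, ssd_ref: float, ssd_ext: float, dmax: float) -> float:
    """Scale the reference output (e.g., cGy/MU at dmax) to an extended SSD."""
    return output_ref * ((ssd_ref + dmax) / (ssd_ext + dmax)) ** 2

# Hypothetical 6 MV values: dmax = 1.5 cm, PDD(10 cm) = 67% at SSD 100 cm.
pdd_180 = 67.0 * mayneord_f(100.0, 180.0, d=10.0, dmax=1.5)
out_180 = inverse_square_output(1.0, 100.0, 180.0, dmax=1.5)
print(f"PDD(10 cm) at SSD 180 cm ~ {pdd_180:.1f}%, output ~ {out_180:.3f} cGy/MU")
```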

Particle deposition on a rotating disk in application to vapor deposition process (VAD) (VAD공정 관련 회전하는 원판으로의 입자 부착)

  • Song, Chang-Geol;Hwang, Jeong-Ho
    • Transactions of the Korean Society of Mechanical Engineers B / v.22 no.1 / pp.61-69 / 1998
  • Vapor Axial Deposition (VAD), one of the optical fiber preform fabrication processes, is performed by deposition of submicron silica particles that are synthesized by combustion of raw chemical materials. In this study, the flow field is assumed to be a forced uniform flow impinging perpendicularly on a rotating disk. Similarity solutions obtained in our previous study are utilized to solve the particle transport equation. The particles are approximated as polydisperse, satisfying a lognormal size distribution, and a moment model is used to predict the distributions of particle number density and size simultaneously. Deposition of the particles on the disk is examined considering convection, Brownian diffusion, thermophoresis, and coagulation for various forced-flow velocities and disk rotating speeds. The deposition rate and efficiency increase directly as the flow velocity increases, because a higher forced-flow velocity thins the thermal and diffusion boundary layers and thus increases the thermophoretic drift and Brownian diffusion of the particles toward the disk. The increase of the disk rotating speed, however, does not directly increase the deposition rate or efficiency. A slower flow velocity extends the time scale for coagulation and thus yields a larger mean particle size and geometric standard deviation at the deposition surface. When coagulation starts farther from the deposition surface, its effects increase, resulting in a larger particle size and a lower deposition rate at the surface.
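
For a lognormal size distribution the k-th moment obeys M_k = M_0·d_g^k·exp(k²·ln²σ_g/2), so tracking a few moments is enough to recover the geometric mean diameter d_g and geometric standard deviation σ_g at the surface. The sketch below inverts the common M0/M3/M6 closure; it is a generic illustration with hypothetical moment values, not necessarily the authors' exact formulation.

```python
# A minimal sketch of the standard lognormal moment relations (a common M0/M3/M6
# closure, not necessarily the authors' exact formulation). Moment values are hypothetical.
import math

def lognormal_params(m0: float, m3: float, m6: float):
    """Recover geometric mean diameter d_g and geometric standard deviation sigma_g
    from the zeroth, third, and sixth moments of a lognormal size distribution,
    using M_k = M_0 * d_g**k * exp(k**2 * ln(sigma_g)**2 / 2)."""
    ln2_sg = (1.0 / 9.0) * math.log(m0 * m6 / m3 ** 2)
    sigma_g = math.exp(math.sqrt(ln2_sg))
    d_g = (m3 / m0 * math.exp(-4.5 * ln2_sg)) ** (1.0 / 3.0)
    return d_g, sigma_g

# Hypothetical moments (SI units, diameters in meters).
m0, m3, m6 = 1.0e12, 1.68e-8, 1.23e-27
d_g, sigma_g = lognormal_params(m0, m3, m6)
print(f"d_g ~ {d_g * 1e9:.1f} nm, sigma_g ~ {sigma_g:.2f}")  # roughly 200 nm and 1.5
```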

Comparison of Center Error of X-ray Field and Light Field Size of Diagnostic Digital X-ray Unit according to the Hospital Grade (병원 등급에 따른 X선조사야와 광조사야 간의 면적 및 중심점 오차 비교)

  • Lee, Won-Jeong;Song, Gyu-Ri;Shin, Hyun-yi
    • Journal of the Korean Society of Radiology / v.14 no.3 / pp.245-252 / 2020
  • The purpose of this study was to highlight the importance of quality control (QC) for reducing exposure and improving image quality by comparing, according to hospital grade, the center-point (CP) error and the difference between the X-ray field (XF) and light field (LF) size in diagnostic digital X-ray devices. XF size, LF size, and CP were measured on 12 digital X-ray devices at 10 hospitals located in 00 metropolitan cities. A phantom was made of 0.8 mm wires of different widths attached to standardized graph paper on a transparent plastic plate, with a cross wire marking the center of the phantom. After placing the phantom on the table of the digital X-ray device, images were obtained with a vertical exposure for each field size. All images were acquired under the same exposure conditions at a focus-detector distance of 100 cm. The XF size, LF size, and CP error were measured using the picture archiving and communication system. Data were expressed as mean with standard error and analyzed using SPSS ver. 22.0. The difference between the XF and LF size was smallest in clinics, followed by university hospitals, hospitals, and general hospitals. Relative to the university hospitals, which had the smallest CP error, there was a statistically significant difference in CP error between university hospitals and clinics (p=0.024). Devices with less than 36 months since QC showed smaller errors than those with 36 months or more (0.26 vs. 0.88, p=0.036). The difference between the XF and LF size was lowest in clinics and the CP error was lowest in university hospitals; moreover, hospitals with a shorter time since QC had smaller CP errors, which means that timely QC according to the QC items is essential.
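
The two quantities compared across hospital grades reduce to simple geometry on the phantom image: the area difference between the two fields and the Euclidean distance between their center points. The sketch below illustrates that computation with hypothetical field-corner coordinates read off the graph-paper phantom.

```python
# A minimal sketch: area difference and center-point (CP) error between the
# X-ray field (XF) and light field (LF). Coordinates (cm) are hypothetical values
# read off the graph-paper phantom image.
import math

def field_metrics(xf, lf):
    """xf, lf: (x_min, y_min, x_max, y_max) rectangles in cm."""
    def area(f):   return (f[2] - f[0]) * (f[3] - f[1])
    def center(f): return ((f[0] + f[2]) / 2.0, (f[1] + f[3]) / 2.0)

    (xc, yc), (lx, ly) = center(xf), center(lf)
    cp_error = math.hypot(xc - lx, yc - ly)   # Euclidean distance between centers
    return area(xf) - area(lf), cp_error

area_diff, cp_err = field_metrics(xf=(-0.3, -0.2, 10.1, 10.2), lf=(0.0, 0.0, 10.0, 10.0))
print(f"area difference = {area_diff:.2f} cm^2, CP error = {cp_err:.2f} cm")
```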