• Title/Summary/Keyword: Reference shape


A Case Study for Simulation of a Debris Flow with DEBRIS-2D at Inje, Korea (DEBRIS-2D를 이용한 인제지역 토석류 산사태 거동모사 사례 연구)

  • Chae, Byung-Gon;Liu, Ko-Fei;Kim, Man-Il
    • The Journal of Engineering Geology / v.20 no.3 / pp.231-242 / 2010
  • In order to assess the applicability of debris-flow simulation to natural terrain in Korea, this study introduced the DEBRIS-2D program developed by Liu and Huang (2006). DEBRIS-2D was developed for the simulation of large debris flows composed of fine and coarse materials, using the constitutive relation proposed by Julien and Lan (1991). Based on the theory of DEBRIS-2D, this study selected a valley where a large debris flow occurred on July 16th, 2006 at Deoksanri, Inje county, Korea. The simulation results show that all of the mass had flowed into the stream within 10 minutes of initiation. At 10 minutes, the debris flow reached the first geological bend and an open area, where its velocity dropped and its flow direction changed. After that, the debris flow accelerated again and reached the village after 40 minutes. The maximum velocity was rather low, between 1 m/sec and 2 m/sec, which is why the debris flow took 50 minutes to reach the village. The change in flow depth shows the strong influence of the valley shape. The simulated result is very similar to what happened in the field, which means that DEBRIS-2D can be applied to the geologic and topographic conditions of Korea without major modification of the analysis algorithm. However, optimal reference values for Korean geologic and topographic properties must be determined for more reliable simulation of debris flows.
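The abstract cites the constitutive relation of Julien and Lan (1991) as the basis of DEBRIS-2D. As a hedged reminder (the symbols below are generic assumptions, not taken from the paper), that relation is usually quoted as a quadratic rheological model combining yield, viscous, and turbulent-dispersive stresses:

```latex
% Quadratic rheological model attributed to Julien and Lan (1991), illustrative form:
% tau_y = yield stress, mu_d = dynamic viscosity, zeta = turbulent-dispersive coefficient
\tau \;=\; \tau_y \;+\; \mu_d\,\frac{du}{dy} \;+\; \zeta\left(\frac{du}{dy}\right)^{2}
```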

Manufacturing Techniques and the Conservation Treatment of Chimi - (Ridge-end tile) Excavated from the Beopcheonsa Temple Site, Wonju - (원주 법천사지 토제 치미의 제작기법과 보존처리)

  • Lee, Seung Gang;Jo, Seong Yeon;Huh, Il Kwon
    • Journal of Conservation Science / v.35 no.5 / pp.518-527 / 2019
  • This study investigates the manufacturing techniques of the chimi (ridge-end roof tile) based on the fragments excavated from the Wonju Beopcheonsa temple site (Historic Site No. 466) and supports the conservation of the fragments. The results of the investigation are categorized into the production of the body parts, the attachment of the wings and feathers, the production of the decorative parts, the scratches in the upper and lower parts, the perforations connecting the upper and lower parts, and the formative features (bending phenomenon). The conservation treatment of the chimi was performed in sequential order, beginning with a preliminary examination, followed by the removal of foreign substances, coating, joining and restoration, and color retouching. Three-dimensional scanning data were employed to determine the location, size, and angle of the original shape and to restore the missing parts after adhesion. The restored chimi measures 118 cm in height and weighs 121 kg, which makes it the fifth largest among all chimi (including restored examples) in Korea. We expect that the pointed feathers will make the chimi from the Beopcheonsa temple site a rare reference, as no specimens with these features have been found in Korea until now.

A Study on Abalone Young Shells Counting System using Machine Vision (머신비전을 이용한 전복 치패 계수에 관한 연구)

  • Park, Kyung-min;Ahn, Byeong-Won;Park, Young-San;Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety / v.23 no.4 / pp.415-420 / 2017
  • In this paper, an algorithm for counting objects on a conveyor system using machine vision is suggested. Object-counting systems based on image processing have been applied in a variety of industries, for purposes such as measuring floating populations and traffic volume. Commonly used counting methods rely on template matching or machine learning for detection and tracking; however, processing time must be short to detect objects on quickly moving conveyor belts. To meet this requirement, the proposed image-processing algorithm is a region-based method. In this experiment, we counted young abalone shells, which are similar in shape, size, and color, on a conveyor system that moves in one direction. The algorithm obtains information on objects in a region of interest by comparing each subsequent frame, which changes continuously, with the object information obtained from the first region. Objects are counted when the information in the first and second images matches. The count was exact when young shells were evenly spaced without overlap, and missed objects were estimated from size information when objects moved without extra space between them. The proposed algorithm can be applied to various object-counting controls on conveyor systems.
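As a hedged illustration of the region-based counting idea described above (not the authors' implementation; the ROI, threshold choices, and the calibrated single-shell area are assumptions), a minimal OpenCV sketch could look like this:

```python
# Minimal sketch of region-based counting on a one-directional conveyor.
# Assumptions: a fixed overhead camera, shells darker than the belt, and a
# pre-calibrated mean pixel area of a single young shell (MEAN_SHELL_AREA).
import cv2

ROI = (100, 0, 200, 480)        # x, y, width, height of the counting region
MEAN_SHELL_AREA = 1500.0        # calibrated pixel area of one young shell

def count_in_roi(frame_gray):
    """Count shells inside the ROI of one frame; touching shells are split by area."""
    x, y, w, h = ROI
    roi = frame_gray[y:y + h, x:x + w]
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    count = 0
    for c in contours:
        area = cv2.contourArea(c)
        if area < 0.3 * MEAN_SHELL_AREA:
            continue                                          # ignore noise blobs
        count += max(1, int(round(area / MEAN_SHELL_AREA)))   # overlap handled by size
    return count

def total_count(frames):
    """Accumulate the total by comparing consecutive frames: only shells newly
    appearing in the ROI (relative to the previous frame) are added."""
    total, prev = 0, 0
    for f in frames:
        cur = count_in_roi(f)
        total += max(0, cur - prev)
        prev = cur
    return total
```

This simplified accounting assumes the belt moves in one direction and that shells do not leave and enter the ROI within the same frame interval.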

Probabilistic fatigue assessment of rib-to-deck joints using thickened edge U-ribs

  • Heng, Junlin;Zheng, Kaifeng;Kaewunruen, Sakdirat;Zhu, Jin;Baniotopoulos, Charalampos
    • Steel and Composite Structures / v.35 no.6 / pp.799-813 / 2020
  • Fatigue cracks at rib-to-deck (RD) joints have been frequently observed in orthotropic steel decks (OSD) using conventional U-ribs (CU). The thickened edge U-rib (TEU) is proposed to enhance the fatigue strength of RD joints, and its effectiveness has been proved through fatigue tests. In-depth full-scale tests were further carried out to investigate both the fatigue strength and the fractography of RD joints. Based on the test results, the mean fatigue strength of TEU specimens is 21% and 17% higher than that of CU specimens in terms of nominal and hot-spot stress, respectively. Meanwhile, the development of fatigue cracks was measured using strain gauges installed along the welded joint, and it was found that the cracks remain almost semi-elliptical in shape during initiation and propagation. For the further application of TEUs, a design curve at a specified survival rate is required for RD joints using TEUs. Since the fatigue strength of welded joints is highly scattered, design curves derived from the limited test data alone are not reliable enough to be used as a reference. On this ground, an experiment-numerical hybrid approach is employed. Based on the fatigue tests, a probabilistic assessment model has been established to predict the fatigue strength of RD joints. In the model, the randomness in material properties, initial flaws, and local geometries is taken into consideration. The multiple-site initiation and coalescence of fatigue cracks are also considered to improve the accuracy. Validation of the model has been rigorously conducted using the test data. By extending the validated model, large-scale databases of fatigue life can be generated in a short period. Through regression analysis of the generated database, design curves for the RD joint have been derived at the 95% survival rate. As a result, FAT 85 and FAT 110 curves with a power index m of 2.89 are recommended for the fatigue evaluation of RD joints using TEUs in terms of nominal stress and hot-spot stress, respectively. Meanwhile, FAT 70 and FAT 90 curves with m of 2.92 are suggested for the evaluation of RD joints using CUs in terms of nominal stress and hot-spot stress, respectively.
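The FAT classes and slopes quoted above can be read as S-N design curves. Assuming the common convention that a FAT value is the stress range (in MPa) sustained for 2 million cycles at the stated survival rate (an assumption about the convention, not a statement from the paper), a small sketch is:

```python
# Illustrative S-N design-curve evaluation: N = 2e6 * (FAT / stress_range)^m.
def allowable_cycles(stress_range_mpa, fat_class_mpa, m):
    """Design life for a constant-amplitude stress range under a FAT class."""
    return 2.0e6 * (fat_class_mpa / stress_range_mpa) ** m

# TEU rib-to-deck joint, nominal stress (FAT 85, m = 2.89 per the abstract):
print(allowable_cycles(100.0, 85.0, 2.89))   # ~1.25e6 cycles (illustrative)
# Conventional U-rib, nominal stress (FAT 70, m = 2.92 per the abstract):
print(allowable_cycles(100.0, 70.0, 2.92))   # ~7.1e5 cycles (illustrative)
```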

A Critical Review on Behavioral Economics with a Focus on Prospect Theory and EBA Model (프로스펙트 이론과 속성별 제거모형을 중심으로 한 행동경제학에 대한 비판적 고찰)

  • Won, Jee-Sung
    • Journal of Distribution Science / v.11 no.5 / pp.63-76 / 2013
  • Purpose - For the past several decades, behavioral economics or behavioral decision theory has undergone rapid development. This study provides a critical review of the development of behavioral economics with a focus on what are deemed to be the core theories in the field. Starting from the utility function proposed by Daniel Bernoulli in the 18th century, the development history of utility functions until the emergence of prospect theory is thoroughly reviewed. Some of the experimental results violating the traditionally assumed utility function and supporting the prospect theory value function are summarized. The most representative principles of rational choice are transitivity, independence from irrelevant alternatives (IIA), and regularity. The development of behavioral economics has been triggered by finding counter-examples to these principles. Some of the choice behaviors discussed in this study as counter-examples to the traditional theories of rational choice are the St. Petersburg paradox; the Allais paradox; gambling behavior; and the various context effects, including the similarity effect, the attraction effect, and the compromise effect. The Elimination-by-Aspects (EBA) model, which was proposed as an explanation for the similarity effect, is discussed in detail as well. Based on the literature review and further analysis, this study summarizes the relationship between the context effects, prospect theory, and the EBA model. Research design, data, and methodology - This study provides an extensive literature review of several important theories in the field of behavioral decision theory and adds critical comments on the theories and the relationships among them. This study first reviews the development of utility functions. Daniel Bernoulli introduced the concept of the utility function to solve the St. Petersburg paradox. In the mid-20th century, Herbert Simon proposed the "satisficing" heuristic and presented a value function with a shape different from traditional utility functions. This study highlights the strengths and weaknesses of several utility functions proposed up to the emergence of the prospect theory value function. Results - This study posits that prospect theory and the EBA model are the two most important theories in the field of behavioral decision theory. They can explain various choice behaviors that traditional utility-maximization analysis has been unable to explain, and their application to various fields continues to increase. This study explains how prospect theory and the EBA model can be used to explain the context effects. Conclusions - Traditional economic theory relies on a single variable called "utility" in explaining consumer choice. However, this study argues that, in investigating consumer choice, several other variables should also be considered: the similarity among alternatives, an alternative's prototypicality within its category, the dominance relationship between alternatives, and the reference point used in evaluating alternatives. Due to the development of behavioral economics, we are now closer to a more complete understanding of consumer choice behavior than in the past, when we had only a single tool called utility.
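As a hedged illustration of the prospect theory value function mentioned above (the parameter values are the Tversky-Kahneman (1992) estimates, not figures from this article), a short sketch is:

```python
# Prospect theory value function: outcomes are gains or losses relative to a
# reference point, concave for gains, convex and steeper for losses.
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """v(x) = x^alpha for gains, -lam * (-x)^beta for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

print(pt_value(100.0))    # ~57.5   (value of a 100 gain)
print(pt_value(-100.0))   # ~-129.5 (a 100 loss looms larger than a 100 gain)
```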


Determination of Minimum Vertex Interval using Shoreline Characteristics (해안선 길이 특성을 이용한 일관된 최소 점간거리 결정 방안)

  • WOO, Hee-Sook;KIM, Byung-Guk;KWON, Kwang-Seok
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.4 / pp.169-180 / 2019
  • Shorelines should be extracted consistently because they are the reference for determining the shape of a country. Even in the same area, inconsistent minimum vertex intervals cause inconsistencies in the coastline length, making it difficult to acquire reliable primary data for national policy decisions. Because shoreline length cannot be calculated consistently when the distance between points is set arbitrarily below 1 m, a methodology for calculating a consistent shoreline length using a minimum vertex interval is proposed herein. To compare our results with the shoreline length published by the KHOA (Korea Hydrographic and Oceanographic Agency) and to analyze the change in shoreline length according to the minimum vertex interval, target sites were selected and grid overlays of the shoreline were generated. Based on the comparison results, minimum grid sizes and the minimum vertex interval can be determined by deriving a polynomial function that estimates the minimum grid sizes yielding consistent shoreline lengths. By comparing published shoreline lengths with shoreline lengths generalized using various grid sizes, and by analyzing the characteristics of the shoreline according to vertex interval, the minimum vertex interval required to achieve consistent shoreline lengths could be estimated. We suggest that determining the minimum vertex interval through quantitative evaluation of the grid size may be useful in calculating consistent shoreline lengths. The proposed method of determining the minimum vertex interval can help derive consistent shoreline lengths and increase the reliability of national shorelines.
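As a hedged sketch of why the vertex interval matters (illustrative only; not the authors' procedure or data), the following shows how enforcing different minimum vertex intervals on the same digitized shoreline yields different total lengths:

```python
# Enforce a minimum vertex interval on a polyline and sum the resulting length:
# vertices closer than the interval to the last kept vertex are dropped.
import math

def length(coords):
    return sum(math.dist(a, b) for a, b in zip(coords, coords[1:]))

def enforce_min_interval(coords, min_interval_m):
    """Keep a vertex only if it is at least min_interval_m from the last kept one."""
    kept = [coords[0]]
    for p in coords[1:]:
        if math.dist(kept[-1], p) >= min_interval_m:
            kept.append(p)
    if kept[-1] != coords[-1]:
        kept.append(coords[-1])     # always keep the end point
    return kept

# A densely digitized, synthetic shoreline: coarser intervals give shorter lengths.
shoreline = [(i * 0.5, 2.0 * math.sin(i * 0.5)) for i in range(200)]
for d in (0.5, 1.0, 5.0):
    print(d, round(length(enforce_min_interval(shoreline, d)), 1))
```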

White light scanner-based repeatability of 3-dimensional digitizing of silicon rubber abutment teeth impressions

  • Jeon, Jin-Hun;Lee, Kyung-Tak;Kim, Hae-Young;Kim, Ji-Hwan;Kim, Woong-Chul
    • The Journal of Advanced Prosthodontics / v.5 no.4 / pp.452-456 / 2013
  • PURPOSE. The aim of this study was to evaluate the repeatability of digitizing silicon rubber impressions of abutment teeth using a white light scanner and to compare the repeatability among abutment tooth types. MATERIALS AND METHODS. Silicon rubber impressions of a canine, a premolar, and a molar tooth were each digitized 8 times using a white light scanner, and 3D surface models were created from the point clouds. The discrepancy between each model and the corresponding reference tooth was measured, and the distribution of these values was analyzed with inspection software (PowerInspect 2012, Delcam plc., Birmingham, UK). Absolute values of the discrepancies were analyzed by the Kruskal-Wallis test and multiple comparisons (α=.05). RESULTS. The discrepancies for the canine, premolar, and molar tooth impressions were 6.3 μm (95% confidence interval [CI], 5.4-7.2), 6.4 μm (95% CI, 5.3-7.6), and 8.9 μm (95% CI, 8.2-9.5), respectively. The discrepancy of the molar tooth impression was significantly higher than that of the other tooth types. The largest variation (as mean [SD]) in discrepancies was seen in the premolar tooth impression scans, 26.7 μm (95% CI, 19.7-33.8), followed by the canine and molar teeth impressions, 16.3 μm (95% CI, 15.3-17.3) and 14.0 μm (95% CI, 12.3-15.7), respectively. CONCLUSION. The repeatability of digitizing abutment tooth silicon rubber impressions with a white light scanner was improved compared with that of a laser scanner, showing a low mean discrepancy of between 6.3 μm and 8.9 μm, which is within a clinically acceptable range. The premolar impression, with its long and narrow shape, showed a significantly larger discrepancy than the canine and molar impressions. Further work is needed to improve the digitizing performance of the white light scanner for deep and slender impressions.
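As a hedged illustration of the statistical comparison named in the abstract (the numbers below are made-up placeholders, not the study's measurements), the Kruskal-Wallis test with a pairwise follow-up could be run as:

```python
# Compare absolute discrepancies (micrometers) of repeated scans per tooth type.
from scipy.stats import kruskal, mannwhitneyu

canine   = [5.9, 6.1, 6.5, 6.2, 6.7, 6.0, 6.4, 6.6]   # placeholder values
premolar = [5.6, 6.9, 6.1, 7.2, 5.8, 6.8, 6.3, 6.5]
molar    = [8.5, 9.1, 8.8, 9.3, 8.6, 9.0, 8.7, 9.2]

h, p = kruskal(canine, premolar, molar)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
# Pairwise follow-up (a Bonferroni correction would divide alpha = .05 by 3):
print(mannwhitneyu(molar, canine))
```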

The Effect of Transverse Magnetic field on Macrosegregation in vertical Bridgman Crystal Growth of Te doped InSb

  • Lee, Geun-Hee;Lee, Zin-Hyoung
    • Proceedings of the Korea Association of Crystal Growth Conference / 1996.06a / pp.522-522 / 1996
  • The effects of a transverse magnetic field and the Peltier effect on melt convection and macrosegregation in the vertical Bridgman crystal growth of Te-doped InSb were investigated by means of microstructure observation, Hall measurement, electrical resistivity measurement, and X-ray analysis. Before the experiments, the interface stability, the convective instability, and the suppression of convection by the magnetic field were calculated theoretically. After doping InSb with 10^18 and 10^19 cm^-3 Te, the Bridgman furnace temperature was set to 650°C. The samples were grown in quartz tubes of 11 mm inner diameter and 100 mm height, and the growth velocity was about 2 μm/sec. To suppress convection during the middle of growth, a magnetic field of 2-4 kG was applied to the melt. To trace the shape of the solid-liquid interface and the actual growth velocity, a 2 A current was passed from the solid to the liquid for 1 second every 50 seconds (Peltier effect). The grown InSb was polycrystalline, and each grain was very sharp. From the viewpoint of microstructure, there was little difference between the samples grown with and without the magnetic field. In the sample grown with the Peltier current, the Peltier marks (striations) were observed regularly, as expected; from these marks, it was found that the solid-liquid interface was flat and the actual growth velocity was about 1-2 μm/sec. According to the theoretical calculation, thermosolutal convection exists in the Te-doped InSb melt without a magnetic field under these growth conditions, and the convection is suppressed by a magnetic field of more than 1 kG. In the experiments, the effective distribution coefficient, k_eff, was 0.35 with no magnetic field, 0.45 at 2 kG, and 0.7 at 4 kG, showing that the stronger the applied magnetic field, the more the convection was suppressed. There was, however, some difference between the theoretical calculation and the experiment, which was attributed to the approximate values used in the theoretical calculation. In addition, the sample grown with the Peltier current showed an unexpected Te distribution in the InSb, appearing as if there were no convection and no macrosegregation. This behavior was attributed to the Peltier marks: when the strong current passed through the growing sample, the marks captured Te, producing Te-rich thin layers that dominated the Hall measurement. The resistivity and mobility of these samples also differed slightly from other reference values, which was attributed to the polycrystalline structure, that is, to the influence of grain boundaries.
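The reported rise of k_eff toward 1 as the field suppresses convection is commonly interpreted with the Burton-Prim-Slichter relation. The relation is not cited in the abstract and is given here only as a hedged illustration of that reasoning, where δ is the solute boundary-layer thickness (which grows as convection is suppressed), v the growth velocity, D the solute diffusivity, and k_0 the equilibrium distribution coefficient:

```latex
% Burton-Prim-Slichter effective distribution coefficient (illustrative):
k_{\mathrm{eff}} \;=\; \frac{k_0}{k_0 + (1 - k_0)\,e^{-v\delta/D}}
```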


A Study on the Geoid Height Determination by GPS (GPS에 의한 지오이드고(高) 결정(決定)에 관(關)한 연구(研究))

  • Kang, Joon Mook;Kim, Hong Jin;Song, Seung Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.13 no.5 / pp.183-190 / 1993
  • Determining an accurate geoid height is very important because it is the basis of 3-D coordinate transformation and the determination of orthometric heights. In this study, to determine the geoid height, bilinear interpolation, GPS leveling, and the OSU91A model were applied to a 5 km × 5 km area and a 60 km × 60 km area within latitude N 36°-37° and longitude E 127°-128°, and the results were compared with conventional leveling data. The accuracy of the bilinear method depended on the shape of the interpolation network and the undulation of the terrain. Where leveling data are satisfactory, GPS leveling is more suitable than any other method; the average difference between GPS leveling and OSU91A was 62 cm. Consequently, the development of a local geoid model is a pressing problem for determining a more precise geoid height. The results of this research will provide reference data for establishing the 3-D coordinate transformation, and they are also expected to be applied to the determination of 3-D positions.
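As a hedged sketch of the two ideas used in the abstract (illustrative corner values only; not the paper's data), the geoid height links the GPS and leveled heights via N = h - H, and the value at an arbitrary point can be interpolated bilinearly from the four surrounding grid nodes:

```python
# Geoid height from GPS (ellipsoidal) and leveled (orthometric) heights, and
# bilinear interpolation of geoid heights inside one grid cell.
def geoid_height(h_ellipsoidal, h_orthometric):
    """N = h - H at a point where both heights are known."""
    return h_ellipsoidal - h_orthometric

def bilinear(n00, n10, n01, n11, tx, ty):
    """Interpolate inside a cell; tx, ty in [0, 1] are offsets from the lower-left node."""
    bottom = n00 * (1 - tx) + n10 * tx
    top    = n01 * (1 - tx) + n11 * tx
    return bottom * (1 - ty) + top * ty

# Illustrative corner geoid heights (meters) for one cell:
print(bilinear(26.1, 26.4, 26.3, 26.7, 0.25, 0.5))   # ~26.29 m
```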


Measurement and Estimation for the Clearance of Radioactive Waste Contaminated with Radioisotopes for Medical Application (의료용 방사성폐기물 자체처분을 위한 방사능 측정 및 평가)

  • Kim, Changbum;Park, MinSeok;Kim, Gi-Sub;Jung, Haijo;Jang, Seongjoo
    • Progress in Medical Physics / v.25 no.1 / pp.8-14 / 2014
  • The amount of radioactive waste to be disposed of by medical institutions has increased rapidly due to the development of radiation diagnosis and therapy. Such waste is produced mostly by very short-lived radioisotopes such as 18F used in PET/CT, 99mTc, 123I, 125I, and 201Tl. The IAEA has proposed clearance criteria for waste that depend on the individual dose (10 μSv/y), the collective dose (1 man-Sv/y), and the activity concentration of each nuclide (IAEA Safety Series No. 111-P-1.1, 1992 and IAEA RS-G-1.7, 2004). Radioactive wastes containing 18F, 99mTc, 123I, 125I, and 201Tl were collected in several types of containers, such as Marinelli beakers, vials, and plastic containers, and the activity concentration of each nuclide was measured in accordance with the IAEA criteria. Measurement methods and procedures for determining the specific activity of the wastes were developed for gamma emitters, using an MCA and a gamma counter, as well as for beta emitters. For the efficiency calibration of the detectors, certified reference materials (CRM) with the same dimensions and shape as the samples were provided by the Korea Research Institute of Standards and Science (KRISS). A correction factor for radioactive decay was calculated from the measurement results, taking into account their relation to the theoretical decay equation. The results of this study will be proposed as an ISO standard.
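As a hedged sketch of the decay correction and clearance check described above (the half-lives are standard values; the clearance limit in the example is a placeholder, not a regulatory figure), a minimal calculation is:

```python
# Decay-correct a measured activity, A(t) = A0 * exp(-ln(2) * t / T_half), and
# compare the resulting concentration with a nuclide-specific clearance level.
import math

HALF_LIFE_H = {"F-18": 1.83, "Tc-99m": 6.01, "I-123": 13.2, "Tl-201": 73.0}

def decayed_activity(a0_bq, nuclide, hours):
    """Activity remaining after `hours` of storage for decay."""
    return a0_bq * math.exp(-math.log(2) * hours / HALF_LIFE_H[nuclide])

def clearable(a0_bq, mass_kg, nuclide, hours, clearance_bq_per_g):
    """True if the decayed concentration falls below the clearance level."""
    conc_bq_per_g = decayed_activity(a0_bq, nuclide, hours) / (mass_kg * 1000.0)
    return conc_bq_per_g <= clearance_bq_per_g

# Example: 1 kg of F-18 waste at 5e4 Bq, stored 24 hours, placeholder limit of 10 Bq/g:
print(clearable(5e4, 1.0, "F-18", 24.0, clearance_bq_per_g=10.0))   # True
```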