• Title/Summary/Keyword: Measurement Algorithm

Negative apparent resistivity in dipole-dipole electrical surveys (쌍극자-쌍극자 전기비저항 탐사에서 나타나는 음의 겉보기 비저항)

  • Jung, Hyun-Key;Min, Dong-Joo;Lee, Hyo-Sun;Oh, Seok-Hoon;Chung, Ho-Joon
    • Geophysics and Geophysical Exploration / v.12 no.1 / pp.33-40 / 2009
  • In field surveys using the dipole-dipole electrical resistivity method, we often encounter negative apparent resistivity. The term 'negative apparent resistivity' refers to apparent resistivity values with the opposite sign to the surrounding data in a pseudosection. Because such values have been regarded as measurement errors, they are usually discarded; some practitioners have even used them in inversion by taking their absolute values. Our field experiments lead us to believe that the main cause of negative apparent resistivity is neither measurement error nor the influence of self potentials, and that it is also not caused by induced polarization effects. One possible cause is the subsurface geological structure. In this study, we provide numerical examples showing that negative apparent resistivity can arise from geological structures. We simulate field data using a 3D numerical modelling algorithm and then extract 2D sections. Our numerical experiments demonstrate that negative apparent resistivity can be caused by geological structures represented by U-shaped and crescent-shaped conductive bodies. Negative apparent resistivity usually occurs when potentials increase with distance from the current electrodes; by plotting voltage-electrode position curves, we confirmed that negative apparent resistivity appears where the voltage curves intersect. These examples suggest that when negative apparent resistivity is observed in field surveys, the possibility that it is caused by geological structure should be considered.
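
The sign behavior described above follows directly from the standard apparent-resistivity formula for the dipole-dipole array. A minimal Python sketch (not the paper's code; the electrode spacing and readings are illustrative):

```python
# Minimal sketch: dipole-dipole apparent resistivity.
# A measured potential difference dV with reversed sign directly yields a negative rho_a.
import math

def dipole_dipole_apparent_resistivity(dV, I, a, n):
    """dV: measured potential difference [V], I: injected current [A],
    a: dipole length [m], n: dipole separation factor (integer)."""
    k = math.pi * n * (n + 1) * (n + 2) * a   # geometric factor for the dipole-dipole array
    return k * dV / I

# If the potential increases with distance from the current dipole, dV changes sign
# and so does the apparent resistivity.
print(dipole_dipole_apparent_resistivity(dV=+0.012, I=1.0, a=10.0, n=3))   # positive rho_a
print(dipole_dipole_apparent_resistivity(dV=-0.012, I=1.0, a=10.0, n=3))   # negative rho_a
```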

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1827-1836 / 2022
  • Disasters occur unexpectedly and are difficult to predict, and their scale and damage are increasing compared to the past; one disaster can sometimes develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters deployed to the scene are exposed to considerable risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection and Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining a LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board. This module was mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was studied. To check the performance of the module, SLAM was tested inside the disaster building, and several SLAM algorithms were compared in terms of distance error. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In future work, a rough-terrain environment with many obstacles will be established and various experiments will be conducted to further enhance usability at disaster sites.
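
One simple way to run the kind of distance comparison mentioned above is to compare inter-point distances in the SLAM result against reference (e.g., tape-measured) distances. A minimal sketch under that assumption (not the paper's evaluation code; the checkpoint coordinates are hypothetical):

```python
# Minimal sketch: rank SLAM results by the error in distances between checkpoints.
import numpy as np

def distance_errors(estimated_points, reference_points, pairs):
    """estimated_points, reference_points: (N, 3) corresponding 3D checkpoints;
    pairs: list of (i, j) index pairs whose inter-point distances are compared."""
    est = np.asarray(estimated_points, dtype=float)
    ref = np.asarray(reference_points, dtype=float)
    errs = np.array([abs(np.linalg.norm(est[i] - est[j]) - np.linalg.norm(ref[i] - ref[j]))
                     for i, j in pairs])
    return errs.mean(), np.sqrt((errs ** 2).mean())   # mean absolute error, RMSE

# Hypothetical checkpoints: SLAM-estimated vs. tape-measured coordinates [m].
est = [[0, 0, 0], [4.98, 0.03, 0], [5.01, 3.95, 0]]
ref = [[0, 0, 0], [5.00, 0.00, 0], [5.00, 4.00, 0]]
mae, rmse = distance_errors(est, ref, pairs=[(0, 1), (1, 2), (0, 2)])
print(f"mean abs. distance error = {mae:.3f} m, RMSE = {rmse:.3f} m")
```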

Evaluation for applicability of river depth measurement method depending on vegetation effect using drone-based spatial-temporal hyperspectral image (드론기반 시공간 초분광영상을 활용한 식생유무에 따른 하천 수심산정 기법 적용성 검토)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association / v.56 no.4 / pp.235-243 / 2023
  • Due to the revision of the River Act and the enactment of the Act on the Investigation, Planning, and Management of Water Resources, regular riverbed change surveys have become mandatory, and a system is being prepared so that local governments can manage water resources in a planned manner. Since the topography of a riverbed cannot be measured directly, it is measured indirectly through contact-type depth measurements such as level surveying or echo sounding, which have low spatial resolution and, owing to constraints in data acquisition, do not allow continuous surveying. Therefore, depth measurement methods using remote sensing, such as LiDAR or hyperspectral imaging, have recently been developed. These allow a wider survey area than the contact-type methods: hyperspectral images are acquired from a lightweight hyperspectral sensor mounted on a drone that can be operated frequently, and the depth is estimated by applying an optimal band-ratio search algorithm. In the existing hyperspectral remote sensing technique, the hyperspectral images acquired along the drone's flight path are matched into a surface-unit image before specific physical quantities are analyzed. Previous studies focus primarily on applying this technology to measure the bathymetry of sandy rivers, whereas other bed materials are rarely evaluated. In this study, the existing hyperspectral image-based water depth estimation technique is applied to a river with vegetation: spatio-temporal hyperspectral imaging and cross-sectional hyperspectral imaging are performed for two cases in the same area, before and after the vegetation is removed. The results show that water depth estimation is more accurate in the absence of vegetation; when vegetation is present, the height of the vegetation is recognized as the bottom and the depth is estimated accordingly. In addition, highly accurate water depth estimation is achieved not only with conventional cross-sectional hyperspectral imaging but also with spatio-temporal hyperspectral imaging. The possibility of monitoring bed fluctuation (water depth fluctuation) using spatio-temporal hyperspectral imaging is thus confirmed.
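
The "optimal band-ratio search" presumably works along the lines of the widely used log-ratio depth model: regress known depths against the log-ratio of every band pair and keep the pair with the best fit. A minimal sketch under that assumption (not the paper's implementation):

```python
# Minimal sketch: search all band pairs for the log-ratio that best predicts depth.
import numpy as np

def optimal_band_ratio(reflectance, depths):
    """reflectance: (n_samples, n_bands) spectra at points with known depth;
    depths: (n_samples,) reference water depths [m]."""
    reflectance = np.asarray(reflectance, dtype=float)
    depths = np.asarray(depths, dtype=float)
    logr = np.log(1000.0 * np.clip(reflectance, 2e-3, None))   # clip to keep the logs positive
    n_bands = reflectance.shape[1]
    best = (None, None, -np.inf, None)                         # (band_i, band_j, R^2, coeffs)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = logr[:, i] / logr[:, j]                        # band-ratio predictor
            A = np.column_stack([x, np.ones_like(x)])
            coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
            pred = A @ coeffs
            r2 = 1.0 - np.sum((depths - pred) ** 2) / np.sum((depths - depths.mean()) ** 2)
            if r2 > best[2]:
                best = (i, j, r2, coeffs)
    return best   # indices of the optimal band pair, R^2, and (slope, intercept)
```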

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science / v.57 no.1 / pp.82-108 / 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics using open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning approaches. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, this method uses only open-source software throughout the entire process. The results of this study confirm that, in quantitative evaluation, the deviation between measurements taken on the actual artifact and on the 3D model was minimal. In addition, the quantitative quality analyses of the open-source software and the commercial software showed high similarity. However, the data processing time was considerably shorter for the commercial software, presumably because of the higher computational speed of its optimized algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: in the 3D models generated by open-source software, noise and roughness appeared on the mesh surface, and it was difficult to confirm production marks and surface patterns on the relics. Nevertheless, some of the open-source software generated quality comparable to that of the commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the variety of open-source software has increased and its performance has significantly improved. Given the high accessibility of such digital technology, the acquisition of 3D model data in archaeology will serve as basic data for the preservation of, and active research on, cultural heritage.
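
The quantitative evaluation described above boils down to comparing dimensions measured on the real artifact with the same dimensions taken from the scaled 3D model. A minimal sketch with hypothetical caliper and model values (illustrative only, not data from the paper):

```python
# Minimal sketch: deviation between caliper measurements and 3D-model measurements.
real_mm  = {"rim_diameter": 142.3, "height": 96.8, "base_diameter": 61.5}   # hypothetical values
model_mm = {"rim_diameter": 142.6, "height": 96.5, "base_diameter": 61.7}   # hypothetical values

for name, real in real_mm.items():
    model = model_mm[name]
    dev = model - real
    print(f"{name}: real {real:.1f} mm, model {model:.1f} mm, "
          f"deviation {dev:+.1f} mm ({dev / real * 100:+.2f}%)")
```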

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures human physical characteristics, has attracted great attention as a highly reliable security technology because there is no risk of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image is difficult to authenticate because of a problem such as a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly during a preprocessing step and apply an image processing algorithm appropriate to that problem. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present and to select an appropriate algorithm, so that the fingerprint image can be readily improved. In this study, we built a database of 17,080 fingerprints by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open dataset, and the fingerprints of 98 Korean students. Criteria were established to determine whether each image in the database contains injuries or wrinkles, and the labels were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were used as the validation set. Using the constructed dataset, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3. A study was conducted to find the model that performed best at this discrimination task. Among the five architectures, ResNet50 showed the best performance, with an accuracy of 81.51%.
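
A minimal sketch of the best-performing setup described above: a ResNet50 trained as a two-class (clean vs. scarred/wrinkled) fingerprint classifier. This is not the authors' code, and the dataset folder layout, image size, and hyperparameters are assumptions:

```python
# Minimal sketch: two-class fingerprint classifier based on ResNet50 (PyTorch).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # fingerprints are grayscale; ResNet expects 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: fingerprints/train/{clean,scar_wrinkle}/*.png
train_set = datasets.ImageFolder("fingerprints/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50()                          # no pretrained weights: train from scratch as a baseline
model.fc = nn.Linear(model.fc.in_features, 2)      # two classes: clean vs. scar/wrinkle
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```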

A Study on Usefulness of Clinical Application of Metal Artifact Reduction Algorithm in Radiotherapy (방사선치료 시 Metal artifact reduction Algorithm의 임상적용 유용성평가)

  • Park, Ja Ram;Kim, Min Su;Kim, Jeong Mi;Chung, Hyeon Suk;Lee, Chung Hwan;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy / v.29 no.2 / pp.9-17 / 2017
  • Purpose: In radiotherapy, the tissue description and electron density indicated by the Computed Tomography (CT) number (also known as the Hounsfield Unit) are important for ensuring the accuracy of CT-based computerized treatment planning. Internal metal implants, however, not only reduce the accuracy of the CT number but also introduce uncertainty into the tissue description, which has led to the development of many clinical algorithms for reducing metal artifacts. The purpose of this study was therefore to investigate the accuracy and clinical applicability of SMART MAR (GE), used in our institution, by analyzing data obtained with it. Methods and materials: For image assessment, ROIs of identical volume were formed using the CIRS ED phantom with rods of six tissue types inserted; original, non-SMART MAR, and SMART MAR images were obtained and compared in terms of CT number and SD. To determine the dose differences caused by changes in CT number due to metal artifacts, PTVs were formed at two sites of the CIRS ED phantom CT images in the computerized treatment planning (CTP) system, unilateral and bilateral titanium insertion images were acquired, identical treatment plans were established for the non-SMART MAR and SMART MAR images, and the mean dose, Homogeneity Index (HI), and Conformity Index (CI) of both PTVs were compared. The absorbed doses at both sites were measured by calculating the dose conversion constant (cGy/nC) using a cylindrical acrylic phantom, a 0.125 cc ion chamber, and an electrometer, with non-SMART MAR and SMART MAR images obtained from the unilateral and bilateral titanium rod insertions, and were compared with the point doses from the CTP system. Results: The image assessment showed that the CT numbers of the SMART MAR images were closer to those of the original images than were those of the non-SMART MAR images, and the SD decreased more in the SMART MAR images. The dose evaluation showed that the mean doses, HI, and CI of the non-SMART MAR images were closer to those of the original images than were those of the SMART MAR images, although the differences did not reach statistical significance. The absorbed dose measurements showed that the differences between the actual absorbed dose and the point dose in the CTP system were 2.69% and 3.63% for the non-SMART MAR images, but decreased to 0.56% and 0.68%, respectively, for the SMART MAR images. Conclusion: Applying SMART MAR to CT images of patients with metal implants improved image quality, as demonstrated by the improved CT number accuracy and reduced SD; this method is therefore considered useful for dose calculation and for contouring tumors and normal tissues.
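
The dose metrics compared above can be illustrated with common (institution-dependent) definitions of HI and CI, plus the percent difference between measured and planned point dose; the exact formulas used in the paper are not stated, so the ones below are assumptions:

```python
# Minimal sketch: common forms of HI, CI, and percent dose difference.
import numpy as np

def homogeneity_index(ptv_doses):
    """HI = (D2% - D98%) / D50%, computed from the dose values sampled within the PTV."""
    d2, d50, d98 = np.percentile(ptv_doses, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(volume_covered_by_prescription, ptv_volume):
    """Simple RTOG-style CI = (volume receiving the prescription dose) / (PTV volume)."""
    return volume_covered_by_prescription / ptv_volume

def percent_dose_difference(measured_cGy, planned_cGy):
    """Percent difference between measured absorbed dose and planned point dose."""
    return (measured_cGy - planned_cGy) / planned_cGy * 100.0
```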

Calculation and Monthly Characteristics of Satellite-based Heat Flux Over the Ocean Around the Korea Peninsula (한반도 주변 해양에서 위성 기반 열플럭스 산출 및 월별 특성 분석)

  • Kim, Jaemin;Lee, Yun Gon;Park, Jun Dong;Sohn, Eun Ha;Jang, Jae-Dong
    • Korean Journal of Remote Sensing / v.34 no.3 / pp.519-533 / 2018
  • The sensible heat flux (SHF) and latent heat flux (LHF) over the ocean around the Korean Peninsula during the recent four years were calculated using the Coupled Ocean-Atmosphere Response Experiment (COARE) 3.5 bulk algorithm and satellite-based atmosphere-ocean variables. Among the four input variables required for the heat flux calculation (10-m wind speed $U$, sea surface temperature $T_s$, air temperature $T_a$, and air specific humidity $Q_a$), $T_a$ and $Q_a$, which are not observed directly by satellites, were estimated from empirical relations developed using satellite-based columnar atmospheric water vapor ($W$) and $T_s$. The estimated satellite-based $T_a$ and $Q_a$ show high correlation coefficients, above 0.96, with buoy observations. The temporal and spatial variability of the monthly ocean heat fluxes was analyzed for the ocean around the Korean Peninsula. The SHF showed low values of about $20 W/m^2$ over the entire area from March to August; in July in particular, SHF below $0 W/m^2$ (i.e., directed from the atmosphere to the ocean) appeared in some areas. The SHF gradually increased from September and reached its maximum in December. Similarly, the LHF showed low values of about $40 W/m^2$ from April to July, but increased rapidly from autumn and was highest in December. The analysis of the monthly characteristics of the meteorological variables affecting the heat fluxes revealed that variations in the air-sea temperature and humidity differences modulate the SHF and LHF, respectively. In addition, the sensitivity of the SHF and LHF to $U$ increases in winter, contributing to the highest ocean heat flux values in that season.
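
COARE-type algorithms are built on the bulk aerodynamic formulas for SHF and LHF; the real COARE 3.5 code solves iteratively for stability-dependent transfer coefficients, but a sketch with fixed typical coefficients shows how the four inputs $U$, $T_s$, $T_a$, and $Q_a$ enter the calculation (all numbers below are illustrative, not from the paper):

```python
# Minimal sketch of the bulk flux formulas underlying COARE-type algorithms.
RHO_AIR = 1.22      # air density [kg m^-3]
CP_AIR  = 1004.7    # specific heat of air [J kg^-1 K^-1]
LV      = 2.45e6    # latent heat of vaporization [J kg^-1]
CH = CE = 1.2e-3    # assumed (fixed) transfer coefficients for heat and moisture

def sensible_heat_flux(U, Ts, Ta, ch=CH):
    """SHF [W m^-2]: positive when the ocean heats the atmosphere."""
    return RHO_AIR * CP_AIR * ch * U * (Ts - Ta)

def latent_heat_flux(U, Qs, Qa, ce=CE):
    """LHF [W m^-2]; Qs, Qa are saturation and air specific humidity [kg kg^-1]."""
    return RHO_AIR * LV * ce * U * (Qs - Qa)

# Plausible winter values: strong wind, large sea-air contrast -> large fluxes.
print(sensible_heat_flux(U=10.0, Ts=12.0, Ta=4.0))      # ~ 118 W m^-2
print(latent_heat_flux(U=10.0, Qs=0.0085, Qa=0.0040))   # ~ 161 W m^-2
```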

Automatic Fracture Detection in CT Scan Images of Rocks Using Modified Faster R-CNN Deep-Learning Algorithm with Rotated Bounding Box (회전 경계박스 기능의 변형 FASTER R-CNN 딥러닝 알고리즘을 이용한 암석 CT 영상 내 자동 균열 탐지)

  • Pham, Chuyen;Zhuang, Li;Yeom, Sun;Shin, Hyu-Soung
    • Tunnel and Underground Space / v.31 no.5 / pp.374-384 / 2021
  • In this study, we propose a new approach for automatic fracture detection in CT scan images of rock specimens. The approach is built on top of the two-stage object detection deep learning algorithm Faster R-CNN, with the major modification of using a rotated bounding box. The rotated bounding box plays a key role in future work to overcome several inherent difficulties of fracture segmentation related to the heterogeneity of the uninteresting background (i.e., minerals) and the variation in the size and shape of fractures. Compared to the commonly used axis-aligned bounding box, the rotated bounding box adapts far better to the elongated shape of a fracture, minimizing the proportion of background within the box. An additional benefit of the rotated bounding box is that it provides information on the orientation and length of the fracture without a further segmentation and measurement step. To validate the applicability of the proposed approach, we train and test it with a number of CT image sets of fractured granite specimens with highly heterogeneous backgrounds, as well as other rocks such as sandstone and shale. The results demonstrate that our approach yields encouraging fracture detection performance, with a mean average precision (mAP) of up to 0.89, and outperforms the conventional approach in terms of the background-to-object ratio within the bounding box.
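
The benefit of the rotated box can be made concrete: a box parameterized as (cx, cy, w, h, θ) directly yields the fracture's length and orientation, and it encloses far less background than the axis-aligned box needed to cover the same elongated fracture. A minimal sketch (illustrative numbers, not the paper's code):

```python
# Minimal sketch: rotated box vs. the axis-aligned box that would enclose it.
import math

def rotated_box_corners(cx, cy, w, h, theta_deg):
    """Corners of a box centered at (cx, cy) with size (w, h), rotated by theta_deg."""
    t = math.radians(theta_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    return [(cx + dx * cos_t - dy * sin_t, cy + dx * sin_t + dy * cos_t)
            for dx, dy in [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]]

cx, cy, w, h, theta = 256.0, 256.0, 300.0, 12.0, 35.0   # a long, thin, inclined fracture
corners = rotated_box_corners(cx, cy, w, h, theta)
xs, ys = zip(*corners)
aabb_area = (max(xs) - min(xs)) * (max(ys) - min(ys))    # axis-aligned enclosing box

print("fracture length ~", max(w, h), "px, orientation =", theta, "deg")
print("rotated box area:", w * h, " axis-aligned enclosing area:", round(aabb_area, 1))
# The axis-aligned box is roughly an order of magnitude larger, i.e., mostly background.
```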

Development and assessment of pre-release discharge technology for response to flood on deteriorated reservoirs dealing with abnormal weather events (이상기후대비 노후저수지 홍수 대응을 위한 사전방류 기술개발 및 평가)

  • Moon, Soojin;Jeong, Changsam;Choi, Byounghan;Kim, Seungwook;Jang, Daewon
    • Journal of Korea Water Resources Association / v.56 no.11 / pp.775-784 / 2023
  • With the increasing trend of extreme rainfall exceeding the design frequency of man-made structures due to extreme weather, it is necessary to review the safety of agricultural reservoirs designed in the past. However, except for reservoirs over a certain size under the jurisdiction of the Korea Rural Affairs Corporation, none of the 13,685 reservoirs managed by local governments can be discharged in an emergency. In such cases it is important to quickly deploy a mobile siphon to the site for pre-release. This study evaluated the applicability of a mobile siphon with a diameter of 200 mm, a minimum head difference of 6 m, and a capacity of 420 m³/h (about 10,000 m³/day), which can perform both pre-release and emergency discharge functions, at the Yugum Reservoir in Gyeongju City. The test bed, Yugum Reservoir, was completed in 1945 and has been in use for about 78 years. According to the hydrological stability analysis, the lowest height of the current dam crest is 27.15 EL.m, which is 0.29 m lower than the reviewed flood level of 27.44 EL.m, indicating a possibility of overtopping of the embankment; the freeboard is insufficient by 1.72 m, so the dam was assessed as not securing hydrological safety. Because water level-discharge measurements were not carried out regularly, it was difficult to establish a clear water level-discharge relationship, so the water level-volume curve was derived arbitrarily. Based on this curve, an operation algorithm for small and medium-sized old reservoirs was developed that considers the pre-release time and the spillway discharge and predicts the reservoir overtopping time according to the flood volume for each return period, thereby securing evacuation time in advance and reducing the risk of collapse. Based on one row of 200 mm diameter mobile siphons, the optimal pre-release start time, which secures evacuation time (about 1 hour) while maintaining 80% of the upper-limit storage (about 30,000 m³) during a 30-year flood, was analyzed to be 12 hours in advance. If this siphon-based pre-release technology and the reservoir operation algorithm are implemented ahead of abnormal weather and support managers' decision-making, it will be possible to secure the safety of residents in areas at risk from reservoir collapse, relieve residents' anxiety through an evacuation support system, and reduce risk factors by providing risk-avoidance measures in the event of a reservoir emergency.
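
The pre-release lead time reported above is essentially the volume to be drawn down divided by the siphon capacity. A minimal sketch of that arithmetic (the current-storage figure is hypothetical; only the 420 m³/h capacity and the roughly 30,000 m³ target come from the abstract):

```python
# Minimal sketch: how far in advance pre-release must start to reach the target storage.
def required_lead_time_hours(current_storage_m3, target_storage_m3,
                             siphon_rate_m3ph=420.0, n_siphons=1):
    """Hours needed to draw the reservoir down from the current to the target storage."""
    volume_to_release = max(current_storage_m3 - target_storage_m3, 0.0)
    return volume_to_release / (siphon_rate_m3ph * n_siphons)

# One 200 mm siphon row, drawing down to ~80% of the upper-limit storage.
print(required_lead_time_hours(current_storage_m3=35_000, target_storage_m3=30_000))
# -> about 11.9 hours, i.e. pre-release should begin roughly 12 hours in advance
```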

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics / v.22 no.1 / pp.42-51 / 2011
  • Nuclear medicine images (SPECT, PET) are widely used for the assessment of myocardial viability and perfusion; however, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq $^{18}F$-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered subset expectation maximization (OSEM). To automatically delineate the myocardial contour and generate the polar map, we used the QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the infarcted area as a percentage of the total left myocardium using TTC staining. We used three threshold methods: a predefined threshold, Otsu, and a multiple Gaussian mixture model (MGMM). The predefined threshold method, commonly used in other studies, was applied with threshold values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, $\cdots$, MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the area below the threshold relative to the total polar map area. The infarct sizes measured with the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar map defect size obtained with predefined thresholds of 20%, 30%, and 40% and the reference infarct size were $7.04{\pm}3.44%$, $3.87{\pm}2.09%$, and $2.15{\pm}2.07%$, respectively; the difference for Otsu was $3.56{\pm}4.16%$ and for MGMM $2.29{\pm}1.94%$. The predefined threshold of 30% showed the smallest mean difference from the reference infarct size; however, MGMM was more accurate than the predefined threshold for reference infarct sizes under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
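
A minimal sketch of the MGMM-style adaptive thresholding idea, using a two-component Gaussian mixture on polar-map intensities (not the authors' implementation; scikit-learn and the midpoint threshold rule are assumptions):

```python
# Minimal sketch: adaptive threshold from a Gaussian mixture fitted to polar-map values,
# then defect size as the percentage of pixels below that threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

def infarct_percent_gmm(polar_map_values, n_components=2):
    """Return (adaptive threshold, defect size as % of the polar map)."""
    x = np.asarray(polar_map_values, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    means = np.sort(gmm.means_.ravel())
    threshold = (means[0] + means[1]) / 2.0        # midpoint between the two lowest component means
    return threshold, 100.0 * np.mean(x.ravel() < threshold)

# Synthetic example: a low-uptake defect (~20% of pixels) mixed with viable myocardium.
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(0.25, 0.05, 200), rng.normal(0.80, 0.08, 800)])
thr, pct = infarct_percent_gmm(values)
print(f"adaptive threshold = {thr:.2f}, infarct size = {pct:.1f}% of polar map")
```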