• Title/Summary/Keyword: 과탐지 (over-detection)


A methodology for Identification of an Air Cavity Underground Using its Natural Poles (물체의 고유 Pole을 이용한 지하 속의 빈 공간 식별 방안)

  • Lee, Woojin
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.6 / pp.566-572 / 2021
  • A methodology for identifying and estimating the coordinates of air cavities under urban ground or sandy soil using their natural poles and natural resonant frequencies is presented, and its potential is analyzed. Simulation models of PECs (Perfect Electric Conductors) with various shapes and dimensions were developed using an EM (electromagnetic) simulator. The Cauchy method was applied to the EM scattering responses obtained from these simulation models, and the natural poles corresponding to the intrinsic characteristics of each object were extracted. A pole library built from these natural poles makes it possible to identify a target by comparing the library entries with the poles computed from the target. The simulation models assumed an air cavity under urban ground or sandy soil. The response of the desired target was extracted from the electromagnetic wave scattering data of its simulation model, and the coordinates of the target were estimated from the time delay of the impulse response (the peak of the impulse response) in the time domain. The MP (Matrix Pencil) method was applied to extract the natural poles of the target. Finally, a 0.2-m-diameter spherical air cavity underground could be identified by comparing the pole library with the natural poles and natural resonant frequency computed for the target. The estimated location (depth) of the target showed an accuracy of approximately 84 to 93%.
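
The pole-extraction step can be illustrated with a short, self-contained sketch. This is not the authors' code: it is a minimal NumPy implementation of the standard SVD-based Matrix Pencil method for estimating complex natural poles from a uniformly sampled transient response, with the model order M, the pencil parameter L, and the toy two-resonance signal chosen purely for illustration.

```python
import numpy as np

def matrix_pencil_poles(y, dt, M, L=None):
    """Estimate M complex natural poles s_k from a uniformly sampled transient
    response y[n] ~ sum_k R_k * exp(s_k * n * dt) via the SVD-based Matrix Pencil method."""
    N = len(y)
    L = N // 2 if L is None else L                    # pencil parameter, typically N/3..N/2
    # Hankel data matrix, (N-L) rows by (L+1) columns.
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh[:M, :].T                                   # M dominant right singular vectors
    V1, V2 = V[:-1, :], V[1:, :]                      # shifted sub-matrices
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)    # z_k = exp(s_k * dt)
    return np.log(z) / dt                             # complex poles s_k = sigma_k + j*omega_k

# Toy usage: two damped resonances standing in for a late-time scattered-field response.
dt = 1e-10
t = np.arange(200) * dt
y = (np.exp(-2e8 * t) * np.cos(2 * np.pi * 1.2e9 * t)
     + 0.5 * np.exp(-1e8 * t) * np.cos(2 * np.pi * 0.7e9 * t))
print(np.sort_complex(matrix_pencil_poles(y, dt, M=4)))
```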

The Study on Process of Illustrious Virtue Becoming an Issue in Horak debate (湖洛論爭) - Focused on Oiam(巍巖) Yi Gan(李柬)'s distinction between Mind(心) and temperament(氣質) (호락논쟁에서 명덕(明德)의 쟁점화 과정 연구 - 외암(巍巖) 이간(李柬)의 심(心)과 기질(氣質)의 분변(分辨)을 중심으로 -)

  • Bae, Je-seong
    • The Journal of Korean Philosophical History / no.54 / pp.77-113 / 2017
  • In the late Joseon(朝鮮) period, the concept of illustrious virtue(明德) became an important subject of debate. Previous studies, however, did not examine how the concept emerged as an issue. This paper explores that question by focusing on the Horak(湖洛) debate. Oiam(巍巖) Yi Gan(李柬), in the course of his discussion with Namdang(南塘), ultimately argued that mind(心) is clearly distinguished from temperament(氣質). The aims of this claim were to divide mind and temperament decisively and to emphasize the mind's control over temperament. Through this, he sought to rule out the possibility that the unaroused state(未發) could be affected by temperament, and he presented the concept of illustrious virtue as critical evidence for his argument. He argued that because the mind is identical with illustrious virtue, it holds a special status essentially distinct from temperament, even though both mind and temperament are material force(氣). This argument opened a new line of discussion in the debate: defining the mind by clarifying the relationship among spiritual perception(虛靈知覺), temperament, and illustrious virtue. This trend was reflected in the debate on whether illustrious virtue is the same for everyone or varies from person to person(明德分殊). Through the analysis in this paper, we can detect a tendency for the definition of the mind to become an independent subject of inquiry.

The Uncanny Valley Effect for Celebrity Faces and Celebrity-based Avatars (연예인 얼굴과 연예인 기반 아바타에서의 언캐니 밸리)

  • Jung, Na-ri;Lee, Min-ji;Choi, Hoon
    • Science of Emotion and Sensibility / v.25 no.1 / pp.91-102 / 2022
  • As activities in virtual spaces become more common, human-like virtual agents such as avatars are increasingly used in place of people, but the uncanny valley effect, in which people feel uncomfortable when they see artifacts that look nearly human, remains an obstacle. In this study, we explored the uncanny valley effect for celebrity avatars. We manipulated the degree of atypicality by adjusting the eye size in photos of celebrities, ordinary people, and their avatars, and measured the intensity of the uncanny valley effect. The uncanny valley effect for celebrities and celebrity avatars was stronger than that for ordinary people. This result is consistent with previous findings that more robust facial representations are formed for familiar faces, making facial changes easier to detect. For the real faces of celebrities and ordinary people, as in previous studies, the higher the degree of atypicality, the greater the uncanny valley effect; however, this pattern was not found for the avatar stimuli. This high tolerance for atypicality in avatars seems to stem from the tendency of cartoon characters to have exaggerated facial features such as the eyes, nose, and mouth. These results suggest that efforts to reduce the uncanny valley are necessary for virtual space services that use celebrity avatars.

Combining Conditional Generative Adversarial Network and Regression-based Calibration for Cloud Removal of Optical Imagery (광학 영상의 구름 제거를 위한 조건부 생성적 적대 신경망과 회귀 기반 보정의 결합)

  • Kwak, Geun-Ho;Park, Soyeon;Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1357-1369 / 2022
  • Cloud removal is an essential image-processing step for any task requiring time-series optical images, such as vegetation monitoring and change detection. This paper presents a two-stage cloud removal method that combines conditional generative adversarial networks (cGANs) with regression-based calibration to construct a cloud-free time-series optical image set. In the first stage, the cGANs generate initial prediction results using quantitative relationships between optical and synthetic aperture radar images. In the second stage, the relationships between the predicted results and the actual values in cloud-free areas are quantified via random forest-based regression modeling and then used to calibrate the cGAN-based predictions. The potential of the proposed method was evaluated through a cloud removal experiment using Sentinel-2 and COSMO-SkyMed images over the rice cultivation area of Gimje. The cGAN model effectively predicted reflectance values in cloud-contaminated rice fields where severe changes in physical surface conditions had occurred. Moreover, the regression-based calibration in the second stage improved the prediction accuracy compared with a regression-based cloud removal method that uses a supplementary image temporally distant from the target image. These experimental results indicate that the proposed method can be effectively applied to restore cloud-contaminated areas when cloud-free optical images are unavailable for environmental monitoring.
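
The second-stage calibration can be sketched independently of the cGAN itself. The snippet below is not the authors' implementation; it assumes the cGAN prediction is already available as an array and illustrates fitting a random forest regression between predicted and observed reflectance over cloud-free pixels, then applying it to the cloud-covered pixels. The single-feature design and the hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def calibrate_cgan_prediction(pred, obs, cloud_mask):
    """Second-stage calibration of a cGAN-predicted reflectance band.

    pred       : 2-D array of cGAN-predicted reflectance
    obs        : 2-D array of observed reflectance (valid only where cloud-free)
    cloud_mask : boolean 2-D array, True where the observation is cloud-contaminated
    """
    clear = ~cloud_mask
    # Fit the predicted-vs-observed relationship on cloud-free pixels only.
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(pred[clear].reshape(-1, 1), obs[clear])
    # Calibrate the prediction everywhere, then keep the real observation where it is valid.
    calibrated = rf.predict(pred.reshape(-1, 1)).reshape(pred.shape)
    return np.where(cloud_mask, calibrated, obs)

# Toy usage with synthetic 2-D arrays standing in for a Sentinel-2 band.
rng = np.random.default_rng(0)
obs = rng.uniform(0.0, 0.4, (50, 50))
pred = obs * 0.9 + 0.02 + rng.normal(0, 0.01, obs.shape)   # biased cGAN-like prediction
mask = np.zeros_like(obs, dtype=bool)
mask[10:30, 10:30] = True                                   # pretend this block is cloudy
restored = calibrate_cgan_prediction(pred, obs, mask)
```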

Fish Distribution Research Using Fishfinder at Fishery Area in the Cheongpyeong Reservoir (어군탐지기를 활용한 청평호 어업 구간의 어류 분포 연구)

  • Baek, Seung-Ho;Park, Sang-Hyeon;Song, Mi-Young;Kim, Jeong-Hui
    • Korean Journal of Ecology and Environment / v.54 no.4 / pp.384-389 / 2021
  • This study was conducted on October 23, 2020 at the Cheongpyeong Reservoir located in Seorak-myeon, Gapyeong-gun, Gyeonggi-do, and analyzed the horizontal and vertical distribution patterns of fish based on data obtained with a fishfinder. The total surface area covered by the fishfinder survey was 782,853 m2, and the zone with a water depth (WD) of 10 m to 12 m was the widest, accounting for 31.7% of the total surface area. Heat map analysis showed that fish density was highest on the right bank below the Gapyeong Bridge, but there was no specific pattern in the horizontal distribution of fish. Analysis of the vertical distribution showed that 86.6% of the fish were observed below 6 m of fish depth (FD, the distance from the water surface to the fish). Analysis of the relative height (RH, the ratio of the distance from the bottom to the fish to the water depth) showed a tendency for fish to be distributed closer to the surface as the WD increased. This tendency could have various causes, such as the water temperature gradient along the water depth, and further studies are required for a detailed explanation.
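
The depth statistics reported above reduce to simple ratios per detected target. The sketch below uses a hypothetical table of echo records (the column names and values are illustrative, not the study's data) to show how the share of fish below 6 m of fish depth (FD) and the relative height (RH) can be computed.

```python
import pandas as pd

# Hypothetical echo records, one row per detected fish target.
echoes = pd.DataFrame({
    "water_depth_m": [11.2, 10.5, 7.8, 12.0],   # WD at the detection point
    "fish_depth_m":  [6.5,  8.1,  3.2, 10.4],   # FD: distance from surface to fish
})

# Relative height (RH): distance from the bottom up to the fish, as a fraction of WD.
echoes["relative_height"] = (echoes["water_depth_m"] - echoes["fish_depth_m"]) / echoes["water_depth_m"]

# Share of targets deeper than 6 m (the statistic reported as 86.6% in the study),
# and how RH varies with water depth.
print((echoes["fish_depth_m"] > 6).mean())
print(echoes[["water_depth_m", "relative_height"]].sort_values("water_depth_m"))
```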

Application of deep learning technique for battery lead tab welding error detection (배터리 리드탭 압흔 오류 검출의 딥러닝 기법 적용)

  • Kim, YunHo;Kim, ByeongMan
    • Journal of Korea Society of Industrial Information Systems / v.27 no.2 / pp.71-82 / 2022
  • To replace the sampling tensile test of products produced in the tab welding process, one of the automotive battery manufacturing processes, vision inspection machines are currently being developed and used. However, vision inspection suffers from inspection position errors and the cost of correcting them. Deep learning has recently been applied to address these problems, and this paper examines the usefulness of applying Faster R-CNN, one such deep learning technique, to the existing product inspection. Images acquired through the existing vision inspection machine are used as training data and trained with the Faster R-CNN ResNet101 V1 1024x1024 model. The results of the conventional vision inspection and the Faster R-CNN inspection are compared and analyzed against test criteria of 0% non-detection and 10% over-detection. The non-detection rate is 34.5% for the conventional vision inspection and 0% for the Faster R-CNN inspection; the over-detection rate is 100% for the conventional vision inspection and 6.9% for Faster R-CNN. These results confirm that deep learning is very useful for detecting welding errors on the lead tabs of automotive batteries.
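
A hedged sketch of setting up such an inspection model with an off-the-shelf detector follows. It is not the paper's code: torchvision ships a Faster R-CNN with a ResNet-50 FPN backbone rather than the ResNet101 V1 1024x1024 model used in the study, and the two-class setup (background plus one indentation-defect class) and the 0.5 score threshold are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Adapt a pretrained Faster R-CNN to a two-class task (background + welding defect).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    # A dummy 1024x1024 image stands in for a lead-tab inspection image.
    image = torch.rand(3, 1024, 1024)
    prediction = model([image])[0]
    # Keep detections above a confidence threshold. Images where a true defect gets no
    # detection contribute to the non-detection rate; detections on defect-free tabs
    # contribute to the over-detection rate.
    keep = prediction["scores"] > 0.5
    print(prediction["boxes"][keep], prediction["labels"][keep])
```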

A Study on Transferring Cloud Dataset for Smoke Extraction Based on Deep Learning (딥러닝 기반 연기추출을 위한 구름 데이터셋의 전이학습에 대한 연구)

  • Kim, Jiyong;Kwak, Taehong;Kim, Yongil
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.695-706 / 2022
  • Medium- and high-resolution optical satellites have proven effective in detecting wildfire areas. However, smoke plumes generated by wildfires scatter the visible light incident on the surface, hindering accurate monitoring of the area where a wildfire occurs, so a technique to extract smoke in advance is required. Deep learning is expected to improve the accuracy of smoke extraction, but the lack of training datasets limits its application. For clouds, however, which scatter visible light in a similar way, a large amount of training data has been accumulated. The purpose of this study is to develop a deep learning-based smoke extraction technique, overcoming the limited datasets by using a cloud dataset for transfer learning. To check the effectiveness of transfer learning, a small-scale smoke extraction training set was built, and the smoke extraction performance was compared before and after applying transfer learning with a public cloud dataset. As a result, performance was enhanced not only in the visible wavelength band but also in the near-infrared (NIR) and short-wave infrared (SWIR) bands. These results suggest that the lack of datasets, a critical limit for applying deep learning to smoke extraction, can be alleviated, and that the resulting advances in smoke extraction technology will benefit wildfire monitoring.
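
The transfer-learning procedure can be sketched with a deliberately small stand-in network. The architecture, band count, file name, and learning rates below are assumptions for illustration, not the study's configuration; the point is reusing weights pretrained on a cloud-mask dataset when fine-tuning on a small smoke dataset.

```python
import torch
import torch.nn as nn

# A tiny encoder-decoder stands in for the segmentation network.
class TinySegNet(nn.Module):
    def __init__(self, in_bands=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)  # 1-channel mask (cloud or smoke)

    def forward(self, x):
        return self.head(self.encoder(x))

# Step 1: pretrain on the large public cloud dataset (training loop omitted), save weights.
cloud_model = TinySegNet()
torch.save(cloud_model.state_dict(), "cloud_pretrained.pt")

# Step 2: transfer to smoke extraction - load the cloud-pretrained weights and fine-tune
# on the small smoke dataset, with a smaller learning rate on the transferred encoder.
smoke_model = TinySegNet()
smoke_model.load_state_dict(torch.load("cloud_pretrained.pt"), strict=False)
optimizer = torch.optim.Adam([
    {"params": smoke_model.encoder.parameters(), "lr": 1e-4},
    {"params": smoke_model.head.parameters(), "lr": 1e-3},
])
loss_fn = nn.BCEWithLogitsLoss()
# A fine-tuning loop over (image, smoke_mask) pairs would follow here.
```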

Qualitative Verification of the LAMP Hail Prediction Using Surface and Radar Data (지상과 레이더 자료를 이용한 LAMP 우박 예측 성능의 정성적 검증)

  • Lee, Jae-yong;Lee, Seung-Jae;Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.3 / pp.179-189 / 2022
  • Ice and water droplets rise and fall above the freezing altitude under the effects of strong updrafts and downdrafts, grow into hail, and then fall to the ground in the form of balls or irregular lumps of ice. Although such hail, which occurs over a local area within a short period of time, causes great damage to the agricultural and forestry sector, there is a paucity of domestic research on predicting hail. The objective of this study was to introduce Land-Atmosphere Modeling Package (LAMP) hail prediction and to measure its performance for 50 hail events that occurred from January 2020 to July 2021. In the study period, the frequency of occurrence was high during spring and during afternoon hours. The average duration of hail was 15 min, and the average hail diameter was 1 cm. The results showed that LAMP predicted hail events with a detection rate of 70%, and that its hail prediction performance deteriorated as the prediction time increased. The radar reflectivity of actual hail cases indicated that the average maximum reflectivity was greater than 40 dBZ regardless of altitude, and approximately 50% of the hail events occurred when the reflectivity ranged from 30 to 50 dBZ. These results can be used to improve the hail prediction performance of LAMP in the future. Improved hail prediction through LAMP should reduce economic losses caused by hail in the agricultural and forestry sector by enabling preemptive measures such as net coverings.
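
The detection rate used for this qualitative verification is a simple hit/miss ratio over observed hail events. The sketch below uses a fabricated three-event list purely to show the computation; the study's actual evaluation covered 50 events and reported a rate of about 70%.

```python
# Each record: was hail observed, and did LAMP predict it? (fabricated values)
events = [
    {"observed": True, "predicted": True},
    {"observed": True, "predicted": False},
    {"observed": True, "predicted": True},
]

hits = sum(e["observed"] and e["predicted"] for e in events)
misses = sum(e["observed"] and not e["predicted"] for e in events)
pod = hits / (hits + misses)  # probability of detection over observed events
print(f"detection rate = {pod:.0%}")
```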

Developing an Occupants Count Methodology in Buildings Using Virtual Lines of Interest in a Multi-Camera Network (다중 카메라 네트워크 가상의 관심선(Line of Interest)을 활용한 건물 내 재실자 인원 계수 방법론 개발)

  • Chun, Hwikyung;Park, Chanhyuk;Chi, Seokho;Roh, Myungil;Susilawati, Connie
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.5 / pp.667-674 / 2023
  • In the event of a disaster within a building, the prompt and efficient evacuation and rescue of occupants becomes the foremost priority for minimizing casualties. For such rescue operations, it is essential to know the distribution of people within the building. Nevertheless, responders currently depend mainly on accounts from people such as building owners or security staff, together with basic data such as floor dimensions and maximum capacity. Accurately determining the number of occupants in the building is therefore of paramount importance for reducing uncertainty at the site and enabling effective rescue activities during the golden hour. This research introduces a methodology that uses computer vision algorithms to count the occupants in distinct building locations from images captured by multiple installed CCTV cameras. The counting methodology consists of three stages: (1) establishing virtual Lines of Interest (LOI) for each camera to construct a multi-camera network environment, (2) detecting and tracking people within the monitoring area using deep learning, and (3) aggregating counts across the multi-camera network. The proposed methodology was validated through experiments conducted in a five-story building, achieving an average accuracy of 89.9%, an average MAE of 0.178, and an RMSE of 0.339, and the advantages of using multiple cameras for occupant counting were demonstrated. This paper shows the potential of the proposed methodology to support more effective and timely disaster management through common surveillance systems by providing prompt occupancy information.
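
The counting stages hinge on detecting when a tracked person crosses a virtual Line of Interest. The sketch below is a minimal geometric illustration, not the authors' implementation: it checks on which side of the LOI consecutive track points fall and updates a count when the side changes. The coordinates and the sign convention for "entering" are assumptions for this example.

```python
import numpy as np

def side_of_line(p, a, b):
    """Sign of the 2-D cross product (b - a) x (p - a): which side of the LOI p lies on."""
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

def update_count(track, a, b, count):
    """Increment/decrement an area's occupant count whenever a tracked person
    crosses the virtual Line of Interest (a -> b) between consecutive frames."""
    for prev, curr in zip(track[:-1], track[1:]):
        s_prev, s_curr = side_of_line(prev, a, b), side_of_line(curr, a, b)
        if s_prev != s_curr and s_prev != 0:
            # Which crossing direction counts as "entering" depends on how the LOI
            # endpoints are ordered; here positive-side arrival means entering.
            count += 1 if s_curr > 0 else -1
    return count

# Toy usage: one tracked trajectory (image coordinates) crossing an LOI at x = 100.
loi_a, loi_b = (100, 200), (100, 0)
trajectory = [(80, 50), (95, 60), (110, 70)]  # detected/tracked person centroids per frame
print(update_count(trajectory, loi_a, loi_b, count=0))
```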

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.17-25 / 2023
  • In this paper, we propose a MobileViT-based model that performs human pose estimation with fewer parameters and faster inference. The base model achieves a lightweight design through a structure that combines the features of convolutional neural networks with those of the Vision Transformer. The Transformer, the key mechanism in this study, has become increasingly influential as Transformer-based models outperform convolutional neural network-based models in computer vision. Likewise, in human pose estimation, the Vision Transformer-based ViTPose holds the best performance on all major benchmarks such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy structure with a large number of parameters and requires a relatively large amount of computation, it is costly for users to train. The base model compensates for the Vision Transformer's weak inductive bias, which otherwise demands a large amount of computation, by learning local representations through its convolutional structure. Finally, the proposed model achieves a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, roughly 1/5 and 1/9 of those of ViTPose, respectively.
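
A rough sense of such a model's footprint can be obtained by pairing a MobileViT backbone with a small heatmap head and counting parameters. The sketch below is not the paper's architecture: it assumes the timm implementation of MobileViT-S, and the head design, the 256x192 input size, and the 17 COCO keypoints are illustrative choices.

```python
import timm
import torch
import torch.nn as nn

# Build a MobileViT backbone (classifier removed) and attach a simple heatmap head.
backbone = timm.create_model("mobilevit_s", pretrained=False, num_classes=0)

x = torch.randn(1, 3, 256, 192)          # a common top-down pose-estimation input size
feats = backbone.forward_features(x)      # unpooled spatial feature map
head = nn.Sequential(
    nn.Conv2d(feats.shape[1], 256, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(256, 17, 1),                # one coarse heatmap per COCO keypoint
)
heatmaps = head(feats)

# Count trainable parameters of the backbone plus the head.
n_params = sum(p.numel() for p in backbone.parameters()) + sum(p.numel() for p in head.parameters())
print(tuple(heatmaps.shape), f"{n_params / 1e6:.2f}M parameters")
```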