• Title/Summary/Keyword: Image Extraction


A Basic Study on the Reduction of Illuminated Reflection for Improving the Safety of Self-driving at Night (야간 자율주행 안전성 향상을 위한 조명반사광 감소에 관한 기초연구)

  • Park, Chang min
    • Journal of Platform Technology
    • /
    • v.10 no.3
    • /
    • pp.60-68
    • /
    • 2022
  • As AI technology develops, interest in the safety of autonomous driving is increasing. Autonomous vehicles have recently become more common, but efforts to address their side effects have lagged behind. Night-time autonomous driving is particularly problematic, because the probability of an accident is higher at night than during the day and more factors must be considered. Among these factors, reflected light from headlights or surrounding illumination can be a fundamental cause of night-time accidents. This study therefore proposes a method to reduce accidents and improve safety by reducing the reflected light generated by the headlights of oncoming vehicles and by various surrounding light sources, a key problem for night-time autonomous vehicles. First, in an image obtained by the sensor of a night-time autonomous vehicle, illumination reflected light is extracted using reflected-light characteristic information, and the color of each pixel is found using a reflection coefficient to reduce the specular area produced by geometric characteristics. In addition, a new area is found using only the brightness component of the specular area, defined as Illuminated Reflection Light (IRL), and finally a method to reduce it is presented. Although the illumination reflected light could not be completely removed, generally satisfactory results were obtained. The proposed approach is therefore expected to reduce casualties by addressing the problems of night-time autonomous driving and improving its safety.
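The paper does not publish its exact reduction algorithm, but its core step, isolating the brightness component of a specular area and attenuating it, can be sketched as below. The threshold and attenuation factor are assumed values for illustration, not parameters from the study:

```python
import numpy as np

def reduce_specular(rgb, thresh=0.9, atten=0.5):
    """Attenuate pixels whose brightness exceeds `thresh`.

    rgb    : float array (H, W, 3) with values in [0, 1]
    thresh : brightness above which a pixel is treated as
             illuminated reflection light (assumed value)
    atten  : factor applied to the detected bright pixels
    """
    # Brightness component (HSV "value" = per-pixel channel maximum)
    v = rgb.max(axis=2)
    mask = v > thresh            # candidate IRL region
    out = rgb.copy()
    out[mask] *= atten           # scale down the reflected light
    return out, mask
```

A production method would, as the abstract notes, also use reflection coefficients and geometric cues rather than a single global threshold.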

Vector-Based Data Augmentation and Network Learning for Efficient Crack Data Collection (효율적인 균열 데이터 수집을 위한 벡터 기반 데이터 증강과 네트워크 학습)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.2
    • /
    • pp.1-9
    • /
    • 2022
  • In this paper, we propose a vector-based augmentation technique that can generate the data required for crack detection, together with a ConvNet (convolutional neural network) that can learn from it. Detecting cracks quickly and accurately is important for preventing building collapses and falling accidents in advance. Solving this problem with artificial intelligence requires a large amount of data, but large collections of crack data are difficult to obtain because photographing real cracks is usually dangerous. This database-construction problem can be alleviated with elastic distortion, which increases the amount of data by deforming specific artificial parts of an image. In this paper, the improved crack patterns are modeled with a ConvNet, and our method obtains results closer to real crack patterns than elastic distortion does. By designing the augmentation on a vector representation of the crack, rather than on the pixel grid used in general data augmentation, much greater variation in the cracks can be obtained. As a result, even with only a small number of crack samples as input, a crack database can be constructed efficiently by generating cracks with diverse directions and patterns.
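The idea of augmenting in vector space rather than pixel space can be illustrated by treating a crack as a polyline of control points and perturbing those points directly. This is a minimal sketch of the concept, not the paper's actual procedure; the jitter and rotation magnitudes are assumptions:

```python
import numpy as np

def augment_crack(points, rng, jitter=2.0, max_rot_deg=15.0):
    """Generate a new crack polyline from an existing one.

    points : (N, 2) array of crack control points (vector form)
    jitter : std-dev of per-point displacement in pixels (assumed)

    Perturbing the vector representation changes crack direction
    and shape while keeping the polyline connected, which is hard
    to guarantee with pixel-level distortion.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Rotate about the centroid, then jitter each control point
    out = (pts - center) @ rot.T + center
    out += rng.normal(scale=jitter, size=pts.shape)
    return out
```

The augmented polylines would then be rasterized to produce training images for the ConvNet.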

Dental Surgery Simulation Using Haptic Feedback Device (햅틱 피드백 장치를 이용한 치과 수술 시뮬레이션)

  • Yoon Sang Yeun;Sung Su Kyung;Shin Byeong Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.6
    • /
    • pp.275-284
    • /
    • 2023
  • Virtual reality simulations are used for education and training in various fields, and have recently become especially widespread in medicine. An education/training simulator consists of hardware that generates tactile/force feedback and image/sound output, providing sensations similar to those a doctor experiences when treating a real patient with real surgical tools, and software that produces realistic images and tactile feedback. Existing simulators are complicated and expensive because they must combine several kinds of hardware to simulate the various surgical instruments used during surgery. In this paper, we propose a dental surgery simulation system using a force-feedback device and a morphable haptic controller. The haptic hardware determines whether the surgical tool collides with the surgical site and provides a sense of resistance and vibration. In particular, a haptic controller that can be deformed, for example by changing its length or bending, can express the different sensations felt with differently shaped surgical tools. When the user manipulates the haptic feedback device, events such as device movement or button clicks are delivered to the simulation system, producing interactions between the dental surgical tools and the oral cavity model, and the resulting haptic feedback is returned to the device. Using these basic techniques, we provide a realistic training experience of impacted wisdom tooth extraction, a representative dental surgical procedure, in a virtual environment represented by detailed three-dimensional models.
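The abstract does not specify the force model, but a common way to turn a collision test into a resistance force is penalty-based haptic rendering: the force is proportional to how far the tool tip has penetrated the surface. The sketch below uses a spherical proxy for the surgical site and an assumed stiffness value:

```python
import numpy as np

def contact_force(tool_tip, sphere_center, sphere_radius, stiffness=500.0):
    """Penalty-based force for a tool tip touching a spherical proxy.

    Returns a force pushing the tip out of the surface, proportional
    to the penetration depth (Hooke's law); zero when there is no
    contact. `stiffness` (N/m) is an assumed value, not from the paper.
    """
    d = np.asarray(tool_tip, float) - np.asarray(sphere_center, float)
    dist = np.linalg.norm(d)
    penetration = sphere_radius - dist
    if penetration <= 0 or dist == 0:
        return np.zeros(3)        # no collision -> no feedback
    normal = d / dist             # outward surface normal
    return stiffness * penetration * normal
```

A real simulator would run this check against the full oral-cavity mesh at haptic rates (~1 kHz) and add vibration cues for drilling.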

Method of Biological Information Analysis Based-on Object Contextual (대상객체 맥락 기반 생체정보 분석방법)

  • Kim, Kyung-jun;Kim, Ju-yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.41-43
    • /
    • 2022
  • Non-contact technologies for acquiring and analyzing biometric information are attracting attention as a way to prevent and block infectious diseases such as those caused by the recent COVID-19 pandemic. Invasive and attached acquisition methods have the advantage of measuring biometric information accurately, but the close contact they require risks spreading contagious diseases. To solve this problem, non-contact methods that extract biometric information such as fingerprints, faces, irises, veins, voice, and signatures with automated devices are spreading across industries as data-processing speed and recognition accuracy increase. However, even though the accuracy of non-contact acquisition has improved, non-contact measurement is strongly influenced by the environment surrounding the measured subject, which distorts the measured information and degrades accuracy. In this paper, we propose a context-based bio-signal modeling technique for interpreting personalized information (images, signals, etc.) in biometric analysis. The technique presents a model that considers contextual and user information during biometric measurement in order to improve performance. The proposed model analyzes signal information based on feature probability distributions, through a context-based signal analysis that maximizes the probability of the predicted value.
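One simple reading of "considering context in a probability-based model" is Bayesian fusion: weight the per-class signal likelihood by a prior derived from the measurement context. This is an illustrative sketch of that general idea, not the authors' specific model:

```python
import numpy as np

def contextual_posterior(likelihood, context_prior):
    """Fuse a per-class signal likelihood with a context prior.

    Both inputs are 1-D arrays over the same classes; the posterior
    is proportional to likelihood x prior (Bayes' rule). The choice
    of feature distributions is an assumption, not from the paper.
    """
    post = np.asarray(likelihood, float) * np.asarray(context_prior, float)
    return post / post.sum()
```

With an uninformative prior the signal dominates; in a noisy environment (weak, flat likelihood) the context prior carries the decision, which matches the motivation of compensating for environmental distortion.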


Optimization-based Deep Learning Model to Localize L3 Slice in Whole Body Computerized Tomography Images (컴퓨터 단층촬영 영상에서 3번 요추부 슬라이스 검출을 위한 최적화 기반 딥러닝 모델)

  • Seongwon Chae;Jae-Hyun Jo;Ye-Eun Park;Jin-Hyoung Jeong;Sung Jin Kim;Ahnryul Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.331-337
    • /
    • 2023
  • In this paper, we propose a deep learning model that detects the lumbar 3 (L3) slice in CT images, which is used to determine the occurrence and degree of sarcopenia. We also propose an optimization technique that uses the oversampling ratio and class weight as design parameters, to address the performance degradation caused by the imbalance between L3-level and non-L3-level portions of the CT data. To train and test the model, 150 whole-body CT images of 104 prostate cancer patients and 46 bladder cancer patients who visited Gangneung Asan Medical Center were used. The deep learning model was ResNet50, and the design parameters of the optimization technique were six types of model hyperparameters plus the data augmentation ratio and the class weight. The proposed optimization-based L3-level extraction model reduced the median L3 error by about 1.0 slice compared with the control model (which optimized only the five types of hyperparameters). These results show that accurate L3 slice detection is possible, and that the data imbalance problem can be handled effectively through oversampling by data augmentation and class-weight adjustment.
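The two imbalance-handling levers the paper tunes, oversampling ratio and class weight, can be sketched independently of the network. Below is a minimal illustration (inverse-frequency weights and minority-index duplication); the actual values in the paper were found by optimization:

```python
import numpy as np

def balance_params(labels, oversample_ratio=1.0):
    """Compute inverse-frequency class weights and oversampled indices.

    labels           : array of 0/1 slice labels (1 = L3 level)
    oversample_ratio : extra copies of each minority sample
                       (a tunable design parameter, as in the paper)
    """
    labels = np.asarray(labels)
    counts = np.bincount(labels, minlength=2)
    # Inverse-frequency weights: rarer class gets a larger weight
    weights = counts.sum() / (2.0 * counts)
    minority = int(np.argmin(counts))
    extra = np.repeat(np.where(labels == minority)[0],
                      int(round(oversample_ratio)))
    indices = np.concatenate([np.arange(len(labels)), extra])
    return weights, indices
```

The weights would feed a weighted cross-entropy loss, and the index list would drive sampling during training.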

Extraction and Utilization of DEM based on UAV Photogrammetry for Flood Trace Investigation and Flood Prediction (침수흔적조사를 위한 UAV 사진측량 기반 DEM의 추출 및 활용)

  • Jung-Sik PARK;Yong-Jin CHOI;Jin-Duk LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.26 no.4
    • /
    • pp.237-250
    • /
    • 2023
  • Orthophotos and DEMs were generated by UAV-based aerial photogrammetry, and an attempt was made to apply them to detailed surveys for producing flood-trace maps. The target area was cultivated land in Goa-eup, Gumi, where an embankment collapsed and inundation occurred under the influence of Typhoon Sanba in 2012. To obtain optimal accuracy from the UAV photogrammetry, the UAV images were taken with an optimal placement of 19 GCPs, and a point cloud, DEM, and orthoimages were then generated through image processing with Pix4Dmapper software. After CloudCompare's CSF filtering was applied to separate the point cloud into ground and non-ground elements, a final corrected DEM was created in GRASS GIS software using only the non-ground elements. The flood level and flood depth extracted from the final DEM were compared with the 2012 flood level and flood depth data provided through the public data portal of the Korea Land and Geospatial Informatix Corporation (LX).
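The flood-depth extraction step reduces to a per-cell subtraction: depth is the water-surface elevation minus the DEM elevation, floored at zero. A minimal sketch, assuming a uniform water level over the area:

```python
import numpy as np

def flood_depth(dem, flood_level):
    """Flood-depth grid from a DEM and a constant flood water level.

    dem         : 2-D array of ground elevations (m)
    flood_level : scalar water-surface elevation (m); assumed uniform
                  over the area for this sketch
    Cells above the water level get depth 0 (not inundated).
    """
    return np.clip(flood_level - np.asarray(dem, float), 0.0, None)
```

In practice the water surface is rarely flat, so a surveyed flood-trace surface would replace the scalar level.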

Prediction of Patient Management in COVID-19 Using Deep Learning-Based Fully Automated Extraction of Cardiothoracic CT Metrics and Laboratory Findings

  • Thomas Weikert;Saikiran Rapaka;Sasa Grbic;Thomas Re;Shikha Chaganti;David J. Winkel;Constantin Anastasopoulos;Tilo Niemann;Benedikt J. Wiggli;Jens Bremerich;Raphael Twerenbold;Gregor Sommer;Dorin Comaniciu;Alexander W. Sauter
    • Korean Journal of Radiology
    • /
    • v.22 no.6
    • /
    • pp.994-1004
    • /
    • 2021
  • Objective: To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. Materials and Methods: All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. Results: While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88). 
Conclusion: Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.
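The AUC values that anchor this study's results can be computed without fitting a curve, via the rank (Mann-Whitney) formulation: AUC equals the probability that a randomly chosen positive case (here, an ICU patient) scores higher than a randomly chosen negative case. A sketch with invented data:

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney formulation.

    Counts, over all positive/negative pairs, how often the positive
    case receives the higher score; ties count half.
    """
    y = np.asarray(y_true)
    s = np.asarray(scores, float)
    pos, neg = s[y == 1], s[y == 0]
    # Pairwise comparison of every positive against every negative
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The scores here would be the fitted logistic-regression probabilities for ICU need, as in the paper's analysis.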

Development of a Prototype System for Aquaculture Facility Auto Detection Using KOMPSAT-3 Satellite Imagery (KOMPSAT-3 위성영상 기반 양식시설물 자동 검출 프로토타입 시스템 개발)

  • KIM, Do-Ryeong;KIM, Hyeong-Hun;KIM, Woo-Hyeon;RYU, Dong-Ha;GANG, Su-Myung;CHOUNG, Yun-Jae
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.4
    • /
    • pp.63-75
    • /
    • 2016
  • Korea, surrounded by ocean on three sides, has historically produced marine products through aquaculture. Surveys of production have recently been conducted to manage aquaculture facilities systematically, and based on the survey results, price controls on marine products have been implemented to stabilize local fishery resources and guarantee a minimum income for fishermen. Such surveys currently depend on manual digitization of aerial photographs each year. Surveys combining manual digitization with high-resolution aerial photographs can evaluate aquaculture accurately using the knowledge of experts who are familiar with each facility's characteristics and deployment. However, aerial photography has cost and time limitations for monitoring aquaculture resources with different life cycles, and it requires many experts. In this study, we therefore developed a prototype system for automatically detecting the boundaries of aquaculture facilities and monitoring them from satellite images. KOMPSAT-3 (13 scenes), a Korean high-resolution satellite, provided imagery collected between October and April, a period in which many aquaculture facilities operate. An ANN classification method was used to automatically detect facility types such as cage, longline, and buoy, and shape files were generated using a digitizing image-processing method that incorporates polygon-generation techniques. The newly developed prototype detected aquaculture facilities at a rate of 93%. The suggested method not only overcomes the limits of the existing aerial-photograph-based monitoring, but also assists experts in detecting aquaculture facilities. Future development should apply further image-processing techniques and finer classification of aquaculture facilities; such systems will support related decision-making through aquaculture facility monitoring.
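The ANN classifier in this pipeline maps per-pixel or per-segment features to facility-type scores. As a rough illustration of such a network's inference step (the architecture and weights here are placeholders, not the paper's trained model):

```python
import numpy as np

def mlp_classify(x, w1, b1, w2, b2):
    """One forward pass of a small fully connected ANN classifier.

    Maps a feature vector x to a probability over facility classes
    (e.g. cage / longline / buoy). Weights are illustrative
    placeholders; the paper's network was trained on KOMPSAT-3 data.
    """
    h = np.maximum(0.0, x @ w1 + b1)   # hidden layer, ReLU
    z = h @ w2 + b2                    # class logits
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()
```

The class with the highest probability becomes the detected facility type, and contiguous detections are then polygonized into shape files.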

Development of an Automatic 3D Coregistration Technique of Brain PET and MR Images (뇌 PET과 MR 영상의 자동화된 3차원적 합성기법 개발)

  • Lee, Jae-Sung;Kwark, Cheol-Eun;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Park, Kwang-Suk
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.5
    • /
    • pp.414-424
    • /
    • 1998
  • Purpose: Cross-modality coregistration of positron emission tomography (PET) and magnetic resonance (MR) images can enhance clinical information. In this study we propose a refined technique to improve the robustness of registration and to implement more realistic visualization of the coregistered images. Materials and Methods: Using the sinogram of the PET emission scan, we extracted a robust head boundary and used the boundary-enhanced PET to coregister PET with MR. Pixels at 10% of the maximum pixel value were considered the boundary of the sinogram, and the boundary pixel values were replaced with the sinogram's maximum value. One hundred eighty boundary points were extracted at intervals of about 2 degrees from each slice of the MR images using a simple threshold method. The best affine transformation between the two point sets was found by least-squares fitting, minimizing the sum of Euclidean distances between the point sets; calculation time was reduced using a pre-defined distance map. Finally, we developed an automatic coregistration program using this boundary detection and surface matching technique, and designed a new weighted normalization technique to display the coregistered PET and MR images simultaneously. Results: With the newly developed method, robust extraction of the head boundary was possible and spatial registration was performed successfully; the mean displacement error was less than 2.0 mm. In visualization of the coregistered images using the weighted normalization method, structures seen in the MR image could be represented realistically. Conclusion: Our refined technique can practically enhance the performance of automated three-dimensional coregistration.
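The least-squares fit between two boundary point sets has a closed-form solution in the rigid case (the Kabsch/Procrustes method). The paper fits a general affine transform; the sketch below shows the rigid special case of the same least-squares criterion in 2-D:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping
    src points onto dst (Kabsch / Procrustes solution).

    src, dst : (N, 2) arrays of corresponding boundary points.
    Minimizes the sum of squared Euclidean distances, the same
    criterion the paper uses for its affine fit.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    h = (src - sc).T @ (dst - dc)           # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T      # optimal 2-D rotation
    t = dc - r @ sc
    return r, t
```

Applied slice by slice to the 180-point boundary sets, this recovers the transform aligning the PET boundary with the MR boundary.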


Measures to Implement the Conservation and Management of Traditional Landscape Architecture Using Aerial Photogrammetry and 3D Scanning (전통조경 보존·관리를 위한 3차원 공간정보 적용방안)

  • Kim, Jae-Ung
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.38 no.1
    • /
    • pp.77-84
    • /
    • 2020
  • This study applies 3D spatial information to traditional landscape spaces by comparing the spatial data produced with a small drone and with a 3D scanner, the two tools used to build 3D spatial information for the efficient preservation and management of traditional landscape spaces composed of areas such as scenic sites and traditional landscape structures. The analysis results are as follows. First, aerial photogrammetry data is less accurate than 3D scanning, but in constructing orthographic image data it was confirmed to be more suitable for monitoring landscape changes than 3D scanning, because texture mapping with the digital imagery allows the RGB images to be read directly. Second, the orthographic image data produced by aerial photogrammetry in a fixed-area traditional landscape space such as Gwanghalluwon Garden was visually accurate and precise. However, data for trees, one of the elements composing the traditional landscape, could not be extracted, so 3D scanning and aerial surveying should be performed in parallel, especially in areas where trees are dense. Third, the surrounding trees in Soswaewon Garden caused many errors in the 3D spatial data, including the topographic data; because buildings, landscape facilities, and trees are packed into a relatively small space there, precise measurement with 3D scanning is preferable to aerial photogrammetry. Comparing the 3D spatial data built for traditional landscape spaces with a small drone and a 3D scanner, both free from temporal and spatial constraints, aerial photogrammetry is effective for large sites such as Hahoe Village and Gyeongju, while 3D scanning is effective for traditional gardens such as Soswaewon Garden.