• Title/Summary/Keyword: Color image detection


A Feasibility Study on Application of a Deep Convolutional Neural Network for Automatic Rock Type Classification (자동 암종 분류를 위한 딥러닝 영상처리 기법의 적용성 검토 연구)

  • Pham, Chuyen;Shin, Hyu-Soung
    • Tunnel and Underground Space
    • /
    • v.30 no.5
    • /
    • pp.462-472
    • /
    • 2020
  • Rock classification is a fundamental discipline for exploring the geological and geotechnical features of a site. It is not an easy task, however, because rocks vary widely in shape and color depending on their origin, geological history, and so on. Given the great success of convolutional neural networks (CNNs) in many image-based classification tasks, there has been increasing interest in applying CNNs to classify geological materials. In this study, the feasibility of a deep CNN is investigated for automatically and accurately identifying rock types, focusing on conditions in which shapes and colors vary even within the same rock type. The approach could be further developed into a mobile application that assists geologists in classifying rocks during fieldwork. The CNN model used in this study is based on a deep residual neural network (ResNet), an ultra-deep CNN widely used in object detection and classification. The proposed CNN was trained on 10 typical rock types and achieved an overall accuracy of 84% on the test set. The results demonstrate that the proposed approach not only classifies rock types from images, but also remains robust when given a highly diverse rock image dataset as input.
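The residual (skip) connection at the heart of the ResNet architecture mentioned above can be sketched in a few lines. This is a minimal NumPy illustration of the block structure only, not the authors' trained model; the layer sizes and random weights are assumptions for demonstration:

```python
import numpy as np

def relu(x):
    # element-wise rectified linear unit
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # ResNet's key idea: the block learns a residual F(x) = W2.ReLU(W1.x)
    # and adds it to the identity path, y = ReLU(x + F(x)), which keeps
    # gradients flowing through very deep stacks of such blocks
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # a toy 4-dimensional feature vector
w1 = 0.1 * rng.normal(size=(4, 4))
w2 = 0.1 * rng.normal(size=(4, 4))
y = residual_block(x, w1, w2)
```

Stacking many such blocks is what makes "ultra-deep" networks trainable in practice.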

Detection of Aesthetic Measure from Stabilized Image and Video (정지영상과 동영상에서 미도의 추출)

  • Rhee, Yang-Won;Choi, Byeong-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.11
    • /
    • pp.33-38
    • /
    • 2012
  • A free-falling object is acted on only by the force of gravity. Motion under gravity alone is free-fall motion, and an object in such motion is a freely falling body. In other words, a freely falling body is an object falling solely under the influence of gravity, regardless of its initial state of motion. In this paper, we ignore air resistance and assume that the free-fall acceleration does not change with height over short vertical distances. Under these assumptions, we can determine the time taken and the maximum height reached at the peak of a vertical upward jump, the time and speed at which a vehicle returns to its starting position, and the time and speed at which it falls to the ground. This can be used in telematics to measure the jumping degree and accident risk of a car or motorcycle.
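The quantities listed above follow directly from the constant-acceleration equations; a small sketch under the paper's stated assumptions (no air resistance, constant g):

```python
G = 9.81  # free-fall acceleration (m/s^2), assumed constant over short vertical distances

def peak_time(v0):
    # time to reach the highest point after launching upward at speed v0 (m/s)
    return v0 / G

def peak_height(v0):
    # maximum height above the launch point: v0^2 / (2g)
    return v0 ** 2 / (2 * G)

def impact_speed(h):
    # speed after falling freely through a height h: sqrt(2gh)
    return (2 * G * h) ** 0.5
```

By symmetry of free fall, `impact_speed(peak_height(v0))` returns the launch speed `v0`, which is why the return speed equals the take-off speed in the scenario described.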

Current Status of KMTNet/DEEP-South Collaboration Research for Comets and Asteroids Research between SNU and KASI

  • BACH, Yoonsoo P.;YANG, Hongu;KWON, Yuna G.;LEE, Subin;KIM, Myung-Jin;CHOI, Young-Jun;Park, Jintae;ISHIGURO, Masateru;Moon, Hong-Kyu
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.42 no.2
    • /
    • pp.82.2-82.2
    • /
    • 2017
  • The Korea Microlensing Telescope Network (KMTNet) is one of the most powerful tools for investigating primordial objects in the inner solar system, in that it covers a large area of the sky ($2{\times}2$ deg$^2$) with a high observational cadence. The Deep Ecliptic Patrol of the Southern sky (DEEP-South) survey has been scanning the southern sky using KMTNet during non-bulge time (45 full nights per year) [1] since 2015 to examine the color, albedo, rotation, and shape of solar system bodies. In January 2017, we launched a new collaborative group between the Korea Astronomy and Space Science Institute (KASI) and Seoul National University (SNU), with support from KASI, to reinforce mutual collaboration between these institutes and to further human resources development by utilizing the KMTNet/DEEP-South data. In particular, we focus on the detection of comets and asteroids serendipitously scanned in DEEP-South for (1) investigating secular changes in comet activity and (2) analyzing precovery and recovery images of objects in NASA's NEOWISE survey region. In this presentation, we describe our scientific objectives and the current status of our use of the KMTNet data, including improving the accuracy of the world coordinate system (WCS) information, an algorithm for finding solar system bodies in the images, and non-sidereal photometry.


A Method for Extracting New Roads Using High-resolution Aerial Orthophotos (고해상도 항공정사영상을 이용한 신설 도로 추출 방법에 관한 연구)

  • Lee, Kyeong Min;Go, Shin Young;Kim, Kyeong Min;Cho, Gi Sung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.3
    • /
    • pp.3-10
    • /
    • 2014
  • Digital maps are produced by experts who digitize data from aerial images and field surveys, and the National Geographic Information Institute updates them every two years. Conventional digitizing methods take a great deal of time and cost, and geographic information must be modified and updated promptly as geographical features change rapidly. Therefore, in this paper, we rapidly update the road information of the digital map using high-resolution aerial orthophotos taken at different times, to which HSI color conversion is applied. Road areas are classified using region-growing methods. In addition, changes in the target area are analyzed by applying the CVA technique, and the accuracy of the proposed extraction is evaluated by comparing the changed road areas.
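The HSI color conversion mentioned above follows the standard RGB-to-HSI formulas; a minimal per-pixel sketch, assuming normalized RGB inputs in [0, 1] rather than the paper's exact implementation:

```python
import math

def rgb_to_hsi(r, g, b):
    # Standard RGB -> HSI conversion; returns (H in degrees, S, I).
    i = (r + g + b) / 3.0                     # intensity: mean of the channels
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # hue from the arccos formula; clamp guards against rounding error
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i
```

Separating intensity from hue/saturation in this way is what makes the road surface easier to segment under varying illumination.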

Internet Based Tele-operation of the Autonomous Mobile Robot (인터넷을 통한 자율이동로봇 원격 제어)

  • Sim, Kwee-Bo;Byun, Kwang-Sub
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.6
    • /
    • pp.692-697
    • /
    • 2003
  • Research on Internet-based tele-operation has received increasing attention over the past few years. In this paper, we implement an Internet-based tele-operating system. In order to transmit the robot's surroundings and control information robustly, we format the data as packets, and in order to transmit very large image data, we use the JPEG compression algorithm. The central problem in Internet-based tele-operation is data transmission latency and data loss. To address this problem, we introduce an autonomous mobile robot with a 2-layer fuzzy controller. We also implement a color detection system so that the robot can perceive objects. We verify the efficacy of the 2-layer fuzzy controller by applying it to a robot equipped with various input sensors. Because the 2-layer fuzzy controller robustly controls a robot with various inputs and outputs at low cost, we expect it to be applied in various sectors.

AN EXPERIMENTAL STUDY ON THE READABILITY OF THE DIGITAL IMAGES IN THE FURCAL BONE DEFECTS (디지털영상의 치근이개부 골손실 판독효과에 관한 실험적 연구)

  • Oh Bong-Hyeon;Hwang Eui-Hwan;Lee Sang-Rae
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.25 no.2
    • /
    • pp.363-373
    • /
    • 1995
  • The aim of this study was to evaluate and compare observer performance between conventional radiographs and their digitized images for the detection of bone loss in the bifurcation of the mandibular first molar. One dried human mandible with minimal periodontal bone loss around the first molar was selected, and 17 serially enlarged step defects were prepared in the bifurcation area. The mandible was radiographed at exposure times of 0.12, 0.20, 0.25, 0.32, 0.40, and 0.64 seconds after each successive step in the preparation, and all radiographs were digitized with an IBM-PC/32 bit-Dx compatible computer, a video camera (VM-S8200, Hitachi Co., Japan), and a color monitor (Multisync 3D, NEC, Japan). A Sylvia Image Capture Board was used as the ADC (analog-to-digital converter). The results were as follows: 1. In the conventional radiographs, the mean readability score was highest at an exposure time of 0.32 seconds; readability also rose as the size of the artificial lesion increased (P<0.05). 2. In the digital images, the mean readability score was highest at an exposure time of 0.40 seconds; readability likewise rose with lesion size (P<0.05). 3. At the same exposure time, the mean readability scores were mostly higher for the digitized images, and as the exposure time increased, the digital images became superior to the radiographs in readability. 4. As lesion size varied, the digital images were superior to the radiographs in detecting small lesions. 5. The coefficient of variation of the mean score showed no significant difference between digital images and radiographs.


Temporal Stereo Matching Using Occlusion Handling (폐색 영역을 고려한 시간 축 스테레오 매칭)

  • Baek, Eu-Tteum;Ho, Yo-Sung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.2
    • /
    • pp.99-105
    • /
    • 2017
  • Generally, stereo matching methods estimate depth information based on color and spatial similarity. However, most depth estimation methods suffer in occlusion regions, because occlusions cause inaccurate depth information; moreover, they do not consider the temporal dimension when estimating disparity. In this paper, we propose a temporal stereo matching method that handles occlusion and discards inaccurate temporal depth information. First, we apply a global stereo matching algorithm to estimate the depth information and segment the image into occluded and non-occluded regions. After occlusion detection, we fill each occluded region with a reasonable disparity value obtained from neighboring pixels of the current pixel. Then, we apply a temporal disparity estimation method using the reliable information. Experimental results show that our method detects occlusion regions more accurately than a conventional method. The proposed method increases the temporal consistency of the estimated disparity maps and outperforms per-frame methods on noisy images.
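The hole-filling step described above, replacing occluded pixels with a reasonable disparity from neighboring pixels, can be sketched on a single scanline. The nearest-valid-neighbor strategy below is one common choice, not necessarily the paper's exact method:

```python
def fill_occlusions(disparity, invalid=-1):
    # Replace occluded (invalid) disparities with the nearest valid value
    # to the left; fall back to the right neighbor at the start of a row.
    # Left-fill is a common heuristic because occluded pixels usually
    # belong to the background revealed behind a foreground edge.
    filled = list(disparity)
    n = len(filled)
    for i in range(n):
        if filled[i] == invalid:
            left = next((filled[j] for j in range(i - 1, -1, -1)
                         if filled[j] != invalid), None)
            right = next((filled[j] for j in range(i + 1, n)
                          if filled[j] != invalid), None)
            filled[i] = left if left is not None else right
    return filled
```

For example, `fill_occlusions([5, -1, -1, 8])` propagates the background disparity 5 into the occluded hole.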

Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.3
    • /
    • pp.60-68
    • /
    • 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter, the positions of the user's eyes are localized using a neural network (NN)-based texture classifier, and the facial features are then localized using heuristics. After the facial features are detected, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
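The abstract does not specify its skin-color model, but explicit RGB rules of the following form are widely used in the literature for this first segmentation step; the thresholds here are illustrative assumptions, not the authors' values:

```python
def is_skin(r, g, b):
    # A simple explicit RGB skin rule (8-bit channel values):
    # skin pixels tend to be bright, red-dominant, and not gray.
    return (r > 95 and g > 40 and b > 20          # bright enough per channel
            and max(r, g, b) - min(r, g, b) > 15  # not gray / low chroma
            and abs(r - g) > 15                   # red clearly above green
            and r > g and r > b)                  # red is the dominant channel
```

Applying such a rule per pixel yields a binary skin mask, on which connected-component analysis then isolates the face region.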


A Systolic Array Structured Decision Feedback Equalizer based on Extended QR-RLS Algorithm (확장 QR-RLS 알고리즘을 이용한 시스토릭 어레이 구조의 결정 궤환 등화기)

  • Lee Won Cheol
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.11C
    • /
    • pp.1518-1526
    • /
    • 2004
  • In this paper, an algorithm is proposed that uses the wavelet transform to detect cuts, which are abrupt scene transitions, as well as fades and dissolves, which are gradual scene transitions. Conventional wavelet-based methods for this purpose use features in both the spatial and frequency domains. In the proposed algorithm, however, the color space of the input image is converted to YUV, and the luminance component Y is transformed into the frequency domain using 2-level lifting. The histogram of only the low-frequency subband, which may contain some spatial-domain features, is then compared with that of the previous frame. Edges obtained from the higher bands are divided into global, semi-global, and local regions, and the histogram of each edge region is compared. The experimental results show performance improvements of about 17% in recall and 18% in precision, as well as good performance in fade and dissolve detection.
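The frame-to-frame histogram comparison at the core of such cut detectors can be sketched as follows; the normalized L1 distance and the 0.4 threshold are illustrative assumptions, not the paper's tuned values:

```python
def hist_diff(h1, h2):
    # Normalized L1 distance between two histograms with equal mass:
    # 0.0 for identical histograms, 1.0 for fully disjoint ones.
    total = sum(h1) or 1
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)

def is_cut(prev_hist, curr_hist, threshold=0.4):
    # Declare an abrupt transition (cut) when consecutive-frame
    # histograms differ by more than the threshold; gradual transitions
    # (fades, dissolves) instead show a sustained run of small diffs.
    return hist_diff(prev_hist, curr_hist) > threshold
```

In the paper's setting the histograms would be computed over the low-frequency wavelet subband of Y rather than raw pixels.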

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks are used to provide an unmanned ground vehicle (UGV) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected-component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and colored particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the computer graphics and image processing algorithms in parallel.
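The connected-component labeling step that separates individual obstacles after ground removal can be sketched on a 2D occupancy grid. This BFS-based, 4-connected version is a generic CPU illustration, not the paper's GPU implementation:

```python
from collections import deque

def label_components(grid):
    # 4-connected component labeling on a binary occupancy grid:
    # each connected group of occupied cells (non-ground points
    # projected onto the grid) receives a distinct obstacle label.
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and labels[r][c] == 0:
                current += 1                    # start a new component
                q = deque([(r, c)])
                labels[r][c] = current
                while q:                        # flood-fill via BFS
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return current, labels
```

Each resulting label then corresponds to one obstacle candidate handed to the reconstruction module.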