• Title/Summary/Keyword: Image processing algorithms


Real-time Hand Region Detection based on Cascade using Depth Information

  • Joo, Sung Il;Weon, Sun Hee;Choi, Hyung Il
    • KIPS Transactions on Software and Data Engineering / v.2 no.10 / pp.713-722 / 2013
  • This paper proposes a method that uses depth information to detect the hand region in real time with a cascade classifier. To ensure stable and fast detection of the hand region even under lighting changes in the test environment, the method uses only depth-based features and detects the hand region with a classifier trained by boosting and cascading. First, to extract features from depth information alone, we calculate the difference between the depth value at the center of the input image and the average depth value within each segmented block, and, to ensure that hand regions of all sizes are detected, we use the central depth value and a second-order linear model to predict the size of the hand region. The cascade method is applied to implement training and recognition by extracting features from the hand region. The proposed classifier maintains accuracy and improves speed by composing each stage of a single weak classifier and selecting the threshold that satisfies the target detection rate with the lowest error rate, thereby avoiding over-fitted training. The trained classifier classifies candidate hand regions and determines the final hand region in a final merging stage. Lastly, to verify performance, we carry out quantitative and qualitative comparative analyses against various conventional AdaBoost algorithms, confirming the efficiency of the proposed hand region detection algorithm.
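
As a concrete illustration of the pipeline this abstract describes, here is a minimal sketch of a depth-difference feature and a cascade of single-threshold weak classifiers with early rejection. The block layout, thresholds, polarities, and synthetic depth map are illustrative assumptions, not the authors' trained parameters.

```python
import numpy as np

def depth_difference_feature(depth, cx, cy, block, bx, by):
    """Difference between the depth at the window center and the mean depth
    of one segmented block (block size and offsets are assumptions)."""
    patch = depth[by:by + block, bx:bx + block]
    return float(depth[cy, cx]) - float(patch.mean())

def cascade_classify(depth, cx, cy, stages):
    """Each stage is one weak classifier, a (block, bx, by, thresh, polarity)
    tuple; the window is rejected when polarity * (f - thresh) < 0."""
    for block, bx, by, thresh, polarity in stages:
        f = depth_difference_feature(depth, cx, cy, block, bx, by)
        if polarity * (f - thresh) < 0:
            return False        # early rejection keeps the cascade fast
    return True                 # survived every stage: hand candidate

# toy example: a closer blob standing in for a hand on a flat background
depth = np.full((120, 160), 2000, dtype=np.float32)
depth[40:80, 60:100] = 900
# with polarity -1 a stage passes when the center is >= 300 units closer
# than the reference block; both stage parameter sets are made up
stages = [(16, 20, 20, -300.0, -1), (16, 100, 40, -300.0, -1)]
print(cascade_classify(depth, 80, 60, stages))   # True for the blob center
```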

Development of Open Set Recognition-based Multiple Damage Recognition Model for Bridge Structure Damage Detection

  • Kim, Young-Nam;Cho, Jun-Sang;Kim, Jun-Kyeong;Kim, Moon-Hyun;Kim, Jin-Pyung
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.1 / pp.117-126 / 2022
  • The number of bridge structures in Korea is continuously increasing, and the number of old bridges that have been in service for more than 30 years is also growing steadily. Bridge aging is treated as a serious social problem not only in Korea but around the world, and existing manpower-centered inspection methods are revealing their limitations. Recently, various bridge damage detection studies using deep learning-based image processing algorithms have been conducted, but because of the limitations of available bridge damage data sets, most are limited to a single damage type, usually cracks, and rely on a closed set classification model. When such a model is applied to actual bridge images, serious misrecognition can occur for input images of unknown classes, such as the background or other objects. In this study, five types of bridge damage, including cracks, were defined, a data set was built and used to train a deep learning model, and an open set recognition-based multiple damage recognition model applying the OpenMax algorithm was constructed. Classification and recognition performance were then evaluated on an open set including untrained images, and the results were analyzed.
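
The abstract does not give implementation details, so the following is only a simplified sketch of the OpenMax idea it names: recalibrate a network's class scores with per-class Weibull models of the distance to each class's mean activation vector, so that far-away inputs accumulate "unknown" probability. The class count matches the paper's five damage types, but the tail size, data, and this reduced formulation (full OpenMax revises only the top-ranked classes) are assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
NUM_CLASSES, TAIL = 5, 20    # 5 damage classes per the paper; tail size assumed

def fit_openmax(activations, labels):
    """Per class: mean activation vector (MAV) plus a Weibull fit to the
    largest training distances from that MAV."""
    models = {}
    for c in range(NUM_CLASSES):
        av = activations[labels == c]
        mav = av.mean(axis=0)
        dists = np.linalg.norm(av - mav, axis=1)
        shape, _, scale = weibull_min.fit(np.sort(dists)[-TAIL:], floc=0.0)
        models[c] = (mav, shape, scale)
    return models

def openmax_probs(models, activation):
    """Shrink each class score by the Weibull CDF of its MAV distance and
    route the shrunk mass to an extra 'unknown' class."""
    scores, unknown = activation.copy(), 0.0
    for c, (mav, shape, scale) in models.items():
        w = weibull_min.cdf(np.linalg.norm(activation - mav), shape, scale=scale)
        unknown += scores[c] * w        # w ~ 1 when far from the class
        scores[c] *= 1.0 - w
    full = np.append(scores, unknown)   # last entry = "unknown"
    e = np.exp(full - full.max())
    return e / e.sum()

# toy activations: each class clustered around a random center
centers = rng.normal(0, 5, size=(NUM_CLASSES, NUM_CLASSES))
labels = np.repeat(np.arange(NUM_CLASSES), 100)
acts = centers[labels] + rng.normal(0, 0.5, size=(500, NUM_CLASSES))
models = fit_openmax(acts, labels)

print(openmax_probs(models, centers[0]))                   # in-distribution
print(openmax_probs(models, np.full(NUM_CLASSES, 30.0)))   # far away: unknown
```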

Motion Vector Based Overlay Metrology Algorithm for Wafer Alignment

  • Lee, Hyun Chul;Woo, Ho Sung
    • KIPS Transactions on Software and Data Engineering / v.12 no.3 / pp.141-148 / 2023
  • Accurate overlay metrology is essential for achieving high yields of semiconductor products. Overlay metrology performance is greatly affected by the overlay target design and the measurement method, so improving overlay target performance requires measurement methods applicable to various targets. In this study, we propose a new algorithm for image-based overlay measurement. The proposed algorithm estimates sub-pixel positions using a motion vector, which is obtained by fitting a quadratic model through polynomial expansion of the pixels in a selected region. Unlike the existing correlation-coefficient-based method, which calculates the stacking error separately for the X-axis and the Y-axis, the motion vector method calculates the stacking error in all directions at once, so more accurate overlay measurement is possible because the relationship between the X-axis and the Y-axis is reflected. However, since the amount of computation is larger than in the existing correlation-coefficient-based algorithm, more computation time may be required. The purpose of this study is not to present an algorithm that improves on the existing method, but to suggest a direction for a new measurement method. The experimental results confirm that measurement results similar to those of the existing method can be obtained.
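
Polynomial expansion with a quadratic model is the basis of Farneback's dense optical flow, so OpenCV's implementation can serve as a rough stand-in for the measurement idea: estimate a single sub-pixel (dx, dy) for the whole target from the flow field. The synthetic target, flow parameters, and the masked-mean readout are illustrative assumptions, not the paper's algorithm.

```python
import cv2
import numpy as np

def overlay_offset(ref, moved, mask):
    """One sub-pixel (dx, dy) from the mean Farneback flow over the target."""
    flow = cv2.calcOpticalFlowFarneback(
        ref, moved, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=5, poly_n=7, poly_sigma=1.5, flags=0)
    return flow[..., 0][mask].mean(), flow[..., 1][mask].mean()

# synthetic overlay target: a bright square shifted by a known sub-pixel amount
ref = np.zeros((128, 128), np.float32)
ref[40:90, 40:90] = 255.0
M = np.float32([[1, 0, 1.25], [0, 1, -0.75]])        # true shift (dx, dy)
moved = cv2.warpAffine(ref, M, (128, 128), flags=cv2.INTER_LINEAR)

dx, dy = overlay_offset(ref.astype(np.uint8), moved.astype(np.uint8), ref > 0)
print(f"estimated (dx, dy) = ({dx:.2f}, {dy:.2f}); true shift (1.25, -0.75)")
```

Note how both axes come out of one flow computation, which is the property the abstract contrasts with per-axis correlation methods.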

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images: Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.164-173 / 2014
  • In underwater robotics, vision is a key element for recognition in underwater environments. However, due to turbidity, an underwater optical camera is rarely usable, and an underwater imaging sonar, the common alternative, delivers low-quality sonar images that are not stable or accurate enough for natural objects to be found by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, together with a recognition method based on a shape matrix transformation, were proposed and validated in Part 1. However, that method does not work properly on undulating and dynamically noisy sea bottoms. To solve this, we propose a framework consisting of four sequential phases over image sequences: selection of likelihood candidates, selection of final candidates, recognition, and tracking. A particle filter-based selection mechanism to eliminate false candidates and a mean shift-based tracking algorithm are also proposed. All four phases run in parallel in real time, and the framework is flexible, allowing internal algorithms to be added or modified. A pool test and a sea trial were carried out to prove the performance, and a detailed analysis of the experimental results is given. Information obtained in the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
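
For the tracking phase, a minimal sketch of mean shift tracking on a sonar-like intensity image is shown below. The synthetic frames, window size, thresholding, and termination criteria are assumptions for illustration, not the framework's actual parameters.

```python
import cv2
import numpy as np

def make_frame(cx, cy):
    """Bright blob on a noisy background, standing in for a sonar return."""
    frame = (np.random.rand(240, 320) * 40).astype(np.uint8)
    cv2.circle(frame, (cx, cy), 12, 255, -1)
    return frame

window = (50, 50, 40, 40)       # x, y, w, h around the first blob (assumed)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

for cx in range(70, 200, 10):   # the blob drifts right across frames
    frame = make_frame(cx, 70)
    # use the thresholded intensity as the probability map: bright = target
    _, prob = cv2.threshold(frame, 128, 255, cv2.THRESH_BINARY)
    _, window = cv2.meanShift(prob, window, criteria)
    print("tracked window:", window)
```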

System Implementation for Generating High Quality Digital Holographic Video using Vertical Rig based on Depth+RGB Camera

  • Koo, Ja-Myung;Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.17 no.6 / pp.964-975 / 2012
  • Recently, attention to the digital hologram, regarded as the ultimate goal of 3-dimensional video technology, has increased. A digital hologram can be generated from a depth image and an RGB image. We propose a new system that captures RGB and depth images and converts them into digital holograms. First, a new cold mirror was designed and produced; it has different transmittance ratios at different wavelengths and provides the same view and focal point to both cameras. After correcting various distortions of the camera system, the resolution difference between the depth and RGB images was adjusted, and the object of interest was extracted using the depth information. Finally, a digital hologram was generated with a computer-generated hologram (CGH) algorithm. All algorithms were implemented in C/C++/CUDA and integrated in a LabView environment, and the hologram was calculated with general-purpose computing on graphics processing units (GPGPU) for high-speed operation. We confirmed that the visual quality of holograms produced by the proposed system is better than that of the previous system.
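
The paper's CGH kernel runs in CUDA; as a language-uniform stand-in, here is a minimal NumPy sketch of a point-cloud CGH: each object point extracted from the depth map contributes a spherical wave to the hologram plane. The wavelength, pixel pitch, hologram size, and toy point cloud are assumptions, not the system's parameters.

```python
import numpy as np

WAVELEN = 532e-9    # green laser wavelength, in meters (assumed)
PITCH = 8e-6        # hologram pixel pitch (assumed)
H, W = 256, 256

ys, xs = np.meshgrid((np.arange(H) - H / 2) * PITCH,
                     (np.arange(W) - W / 2) * PITCH, indexing="ij")

def compute_cgh(points):
    """points: array of (x, y, z, amplitude) rows in meters; returns the
    complex field from superposed spherical waves."""
    k = 2.0 * np.pi / WAVELEN
    field = np.zeros((H, W), dtype=np.complex128)
    for px, py, pz, amp in points:
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r    # one spherical wave per point
    return field

# toy object: three points at different depths, as a depth map might yield
pts = np.array([[0.0, 0.0, 0.20, 1.0],
                [3e-4, -2e-4, 0.25, 0.8],
                [-4e-4, 1e-4, 0.30, 0.6]])
hologram = np.angle(compute_cgh(pts))            # phase-only hologram pattern
print(hologram.shape, hologram.min(), hologram.max())
```

The per-point loop is what the GPGPU implementation parallelizes, since every hologram pixel accumulates contributions independently.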

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration / v.25 no.3 / pp.140-161 / 2022
  • Seismic data with missing traces are often acquired regularly or irregularly due to environmental and economic constraints, so seismic data interpolation is an essential step in seismic data processing. Recently, research on machine learning-based seismic data interpolation has been flourishing; in particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), widely used for super-resolution problems in the image processing field, are also used for seismic data interpolation. In this study, the CNN-based U-Net and the GAN-based conditional Wasserstein GAN (cWGAN) were used as seismic data interpolation methods, and their results and performance were evaluated thoroughly to find an optimal interpolation method that reconstructs missing seismic data with high accuracy. The work process for model training and performance evaluation was divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces, and evaluated it by applying the trained model to six different test datasets combining regular and irregular sampling with different sampling ratios. In Case II, six different models were generated using training datasets sampled in the same way as the six test datasets, and the models were applied to the same test datasets used in Case I to compare the results. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM; however, cWGAN added noise to the prediction results, so an ensemble technique was applied to remove the noise and improve the accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
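
A minimal sketch of the evaluation and ensemble steps described above: average several predictions to suppress generator noise, then score against the ground truth with PSNR and SSIM. The data here are synthetic stand-ins for interpolated gathers, and the ensemble is a plain mean over noise realizations, which is an assumption about how the paper's ensembling works.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(42)
# toy "gather": a smooth wavefield standing in for the true seismic section
truth = np.sin(np.linspace(0, 8 * np.pi, 128))[None, :] * np.ones((128, 1))

# five "model" outputs: the truth plus independent noise realizations
preds = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(5)]
ensemble = np.mean(preds, axis=0)        # independent noise averages out

data_range = truth.max() - truth.min()
for name, p in [("single", preds[0]), ("ensemble", ensemble)]:
    psnr = peak_signal_noise_ratio(truth, p, data_range=data_range)
    ssim = structural_similarity(truth, p, data_range=data_range)
    print(f"{name:8s}  PSNR={psnr:5.2f} dB  SSIM={ssim:.3f}")
```

Running this shows the ensemble scoring higher on both metrics than any single noisy prediction, mirroring the improvement the abstract reports.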

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression

  • Cho, Sangin;Pyun, Sukjoon
    • Geophysics and Geophysical Exploration / v.26 no.2 / pp.37-51 / 2023
  • The ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than that of the reflection events we usually want to obtain, so ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and the curvelet transform, have been developed to suppress the ground roll, but existing methods still need improvements in suppression performance and efficiency. Various studies on ground roll suppression have recently been conducted using deep learning methods developed for image processing. In this paper, we introduce three models for ground roll suppression, based on the convolutional neural network (CNN) or the conditional generative adversarial network (cGAN): DnCNN (De-noiseCNN), pix2pix, and CycleGAN, and explain them in detail through numerical examples. Common shot gathers from the same field were divided into training and test datasets; we trained the models on the training data and evaluated their performance on the test data. Training these models with field data requires ground-roll-free target data; therefore, data in which the ground roll was suppressed by f-k filtering were used as the ground truth. To evaluate the performance of the deep learning models and compare the training results, we used quantitative indicators based on similarity to the ground-truth data, such as the correlation coefficient and the structural similarity index measure (SSIM). The DnCNN model exhibited the best performance, and we confirmed that the other models could also be applied to suppress the ground roll.
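
A minimal sketch of the f-k filtering step used above to build the ground-truth data: transform a shot gather to frequency-wavenumber space, mute the fan of low apparent velocities where ground roll lives, and transform back. The sampling intervals, velocity cutoff, and toy gather are illustrative assumptions.

```python
import numpy as np

def fk_filter(gather, dt, dx, v_cut=800.0):
    """gather: (n_time, n_traces) array; mutes events with apparent
    velocity |f/k| below v_cut (in m/s)."""
    nt, nx = gather.shape
    spec = np.fft.fft2(gather)
    f = np.fft.fftfreq(nt, d=dt)[:, None]    # temporal frequencies, Hz
    k = np.fft.fftfreq(nx, d=dx)[None, :]    # spatial wavenumbers, 1/m
    vel = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    return np.real(np.fft.ifft2(spec * (vel >= v_cut)))

# toy gather: a flat "reflection" plus a slow linear "ground roll"
nt, nx, dt, dx = 512, 64, 0.002, 10.0
gather = np.zeros((nt, nx))
gather[200, :] = 1.0                          # reflection at t = 0.4 s
for trace in range(nx):
    t = int(50 + trace * dx / 400.0 / dt)     # ~400 m/s linear moveout
    if t < nt:
        gather[t, trace] = 2.0
filtered = fk_filter(gather, dt, dx)
print("energy before: %.1f  after f-k filter: %.1f"
      % (np.square(gather).sum(), np.square(filtered).sum()))
```

The flat reflection sits near k = 0 (effectively infinite apparent velocity) and survives the mask, while the 400 m/s event falls below the 800 m/s cutoff and is muted.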

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, much effort has been devoted to identifying current technology trends and analyzing the directions of AI development, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, service, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this is considered one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes a great deal to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI that have been developed through the online collaboration of many parties. The study searched and collected a list of major AI-related projects generated on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics and technical fields of the collected projects. The analysis showed that the number of software development projects per year was less than 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015, and the number of AI-related open source projects grew especially rapidly in 2016 (2,559 projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics rose to the top of the list despite not being there from 2009 to 2012, indicating that OSS was being developed in the medical field in order to utilize AI technology. Moreover, although computer vision was among the top 10 topics by appearance frequency from 2013 to 2015, it was not among the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly. Examining the trend of technology development using appearance frequency and degree centrality showed that machine learning had the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases, and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can serve as a baseline dataset for more empirical analyses of future technology trends and technology convergence.
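
A minimal sketch of the two measures the study relies on, appearance frequency and degree centrality over a topic co-occurrence network, using networkx. The project and topic lists are toy stand-ins for the Github data.

```python
import networkx as nx
from itertools import combinations

# toy stand-ins for Github projects, each tagged with topic labels
projects = [
    ["machine-learning", "python", "deep-learning"],
    ["machine-learning", "tensorflow", "deep-learning"],
    ["computer-vision", "deep-learning", "python"],
    ["reinforcement-learning", "python"],
    ["machine-learning", "natural-language-processing", "python"],
]

# appearance frequency: how many projects mention each topic
freq = {}
for topics in projects:
    for t in topics:
        freq[t] = freq.get(t, 0) + 1

# co-occurrence graph: an edge between topics tagged on the same project
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
for t in sorted(freq, key=freq.get, reverse=True):
    print(f"{t:30s} freq={freq[t]}  degree_centrality={centrality[t]:.2f}")
```

As in the study, the two rankings mostly agree, but a topic that co-occurs with many different topics can rank higher by centrality than by raw frequency.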

A Dynamic Load Balancing Scheme based on Host Load Information in a Wireless Internet Proxy Server Cluster

  • Kwak Hu-Keun;Chung Kyu-Sik
    • Journal of KIISE: Information Networking / v.33 no.3 / pp.231-246 / 2006
  • A server load balancer accepts client requests and distributes them among the servers in a wireless internet proxy server cluster. LVS (Linux Virtual Server), a software-based server load balancer, supports several load balancing algorithms that distribute client requests in a round-robin way, in a hashing-based way, or by assigning each request to the server that currently has the fewest concurrent connections. An improved load balancing algorithm that considers server performance has been proposed; it determines in advance upper and lower limits on the number of concurrent connections each server can handle at its maximum performance and applies these static limits to load balancing. However, it does not use run-time server load information. In this paper, we propose a dynamic load balancing scheme in which the load balancer keeps each server's CPU load information at run time and assigns each new client request to the server with the lowest load. Using a cluster consisting of 16 PCs, we performed experiments with static content (images and HTML). Compared to the existing schemes, the experimental results show performance improvements for client requests requiring CPU-intensive processing and for clusters consisting of servers with different performance.
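
A minimal sketch of the proposed dynamic scheme: servers report their CPU load at run time and the balancer sends each new request to the least-loaded one. The class shape, server names, and simulated load values are assumptions; the paper's implementation works inside LVS rather than in application code.

```python
import threading
import random

class DynamicBalancer:
    """Tracks the latest reported CPU load per server and picks the
    least-loaded server for each new request."""

    def __init__(self, servers):
        self.loads = {s: 0.0 for s in servers}
        self.lock = threading.Lock()

    def report_load(self, server, cpu_load):
        """Called periodically by each server's load-reporting agent."""
        with self.lock:
            self.loads[server] = cpu_load

    def pick_server(self):
        """Assign the next request to the server with the lowest CPU load."""
        with self.lock:
            return min(self.loads, key=self.loads.get)

balancer = DynamicBalancer([f"proxy{i}" for i in range(1, 5)])
for s in balancer.loads:
    balancer.report_load(s, random.uniform(0.1, 0.9))  # simulated reports
print("next request goes to:", balancer.pick_server())
```

Unlike the static-limit scheme, the decision here changes as soon as fresh load reports arrive, which is the behavior that pays off for CPU-intensive requests and heterogeneous servers.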

Intelligent Railway Detection Algorithm Fusing Image Processing and Deep Learning for the Prevention of Unusual Events

  • Jung, Ju-ho;Kim, Da-hyeon;Kim, Chul-su;Oh, Ryum-duck;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.109-116 / 2020
  • With the advent of high-speed railways, railways have become one of the most frequently used means of transportation at home and abroad. In environmental terms, they also emit less carbon dioxide and are more energy efficient than other modes of transportation. As interest in railways increases, railway safety has become an important concern. In particular, visual abnormalities occur when obstacles such as animals or people suddenly appear in front of the train, and detecting the rail tracks themselves is a basic prerequisite for preventing such accidents. Images can be collected through cameras installed along the railway, and rails can be detected either by traditional image processing or by a deep learning algorithm. Traditional methods have difficulty detecting rails accurately because of the various noise around them, whereas deep learning algorithms can detect them accurately; the proposed approach combines the two algorithms to detect the rails precisely. The accuracy of the proposed railway rail detection algorithm is evaluated on the collected data.
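
A minimal sketch of the traditional half of the fused approach described above: edge detection plus a probabilistic Hough transform to find the long, near-straight lines of the rails. The thresholds and the synthetic frame are illustrative assumptions, not the paper's tuned parameters, and the deep learning half is omitted.

```python
import cv2
import numpy as np

def detect_rail_lines(gray):
    """Return candidate rail line segments from a grayscale frame."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=100, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]

# synthetic frame: two bright rails converging toward the horizon
img = np.zeros((240, 320), np.uint8)
cv2.line(img, (100, 239), (150, 40), 255, 3)
cv2.line(img, (220, 239), (170, 40), 255, 3)

for x1, y1, x2, y2 in detect_rail_lines(img):
    print(f"rail segment: ({x1},{y1}) -> ({x2},{y2})")
```

In a fused pipeline of the kind the abstract describes, segments like these could be cross-checked against a deep learning model's rail mask to reject noise-induced false lines.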