• Title/Summary/Keyword: edge-detection algorithm


Object Detection Algorithm Using Edge Information on the Sea Environment (해양 환경에서 에지 정보를 이용한 물표 추출 알고리즘)

  • Jeong, Jong-Myeon;Park, Gyei-Kark
    • Journal of the Korea Society of Computer and Information / v.16 no.9 / pp.69-76 / 2011
  • According to related reports, about 60 percent of ship collisions result from operating mistakes caused by human factors, and negligence of observation accounts for 66.8 percent of those human-factor accidents. Automatic detection and tracking of objects in IR images are therefore crucial for safe navigation, because they can relieve the officer's burden and remedy the imperfections of the human visual system. In this paper, we present a method to detect objects such as ships, rocks, and buoys in a sea IR image. Most edge directions in a sea image are horizontal, and most vertical edges come from object areas; the presented method uses this as the characteristic for object detection. Vertical edges are extracted from the input image and isolated edges are eliminated. A morphological closing operation is then performed on the vertical edges, which connects the vertical edges that actually compose an object and turns them into an object candidate region. Next, reference object regions are extracted using horizontal edges, which appear on the boundaries between the sea surface and the objects. Finally, object regions are acquired by sequentially integrating the reference regions and the object candidate regions.
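
A minimal sketch of the vertical-edge and morphological-closing stage described above, written with OpenCV. This is an illustrative reading of the abstract, not the authors' implementation; the thresholds, minimum component area, and kernel size are assumptions.

```python
import cv2
import numpy as np

def object_candidates(ir_image: np.ndarray) -> np.ndarray:
    """Rough candidate-region extraction from a grayscale sea IR image."""
    # Vertical edges respond to the horizontal (x) derivative.
    sobel_x = cv2.Sobel(ir_image, cv2.CV_32F, dx=1, dy=0, ksize=3)
    edges = (np.abs(sobel_x) > 40).astype(np.uint8)  # threshold chosen arbitrarily

    # Eliminate isolated edges (tiny connected components).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 5:
            edges[labels == i] = 0

    # Morphological closing links nearby vertical edges into candidate regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
```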

Development of a Lane Detect Algorithm from Road-Facing Cameras on a Vehicle (차량에 부착된 측하방 CCD카메라를 이용한 차선추출 알고리즘 개발)

  • Rhee, Soo-Ahm;Lee, Tae-Yoon;Kim, Tae-Jung;Sung, Jung-Gon
    • Journal of Korean Society for Geospatial Information Science / v.13 no.3 s.33 / pp.87-94 / 2005
  • Three-dimensional positional information of a lane can be calculated automatically by combining GPS and IMU data, provided that the image coordinates of the lane centers are given. The Road Safety Survey and Analysis Vehicle (RoSSAV) is currently under development to analyze the three-dimensional safety and stability of roads. RoSSAV carries GPS and IMU sensors for the positional information of the vehicle and two road-facing CCD cameras for the extraction of lane coordinates. In this paper, we develop a technique that automatically detects lane centers from the road-facing cameras of RoSSAV. The proposed algorithm defines line-support regions by grouping pixels with similar edge orientation and magnitude, and extracts a line from each line-support region by planar fitting. If the extracted lines and the region between them satisfy brightness and width criteria, the region is classified as a lane. The proposed algorithm was more precise and stable than a previously proposed algorithm based on a brightness threshold, and experiments with real road scenes confirmed that lanes were effectively extracted.
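
A rough sketch of the line-support-region idea: group pixels with similar gradient orientation and sufficient magnitude, then fit a line to each group. For simplicity this fits a line directly to pixel coordinates rather than performing the paper's planar fit, and the orientation bins, magnitude threshold, and minimum group size are assumptions.

```python
import cv2
import numpy as np

def line_support_regions(gray: np.ndarray, mag_thresh: float = 30.0, n_bins: int = 8):
    """Group pixels by similar gradient orientation and fit a line to each group."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)

    regions = []
    bin_edges = np.linspace(0, np.pi, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = ((mag > mag_thresh) & (angle >= lo) & (angle < hi)).astype(np.uint8)
        n, labels = cv2.connectedComponents(mask, connectivity=8)
        for i in range(1, n):
            ys, xs = np.nonzero(labels == i)
            if xs.size < 30:                            # discard tiny groups
                continue
            # Fit a line (direction vector + point) to this line-support region.
            vx, vy, x0, y0 = cv2.fitLine(
                np.column_stack([xs, ys]).astype(np.float32),
                cv2.DIST_L2, 0, 0.01, 0.01).ravel()
            regions.append((x0, y0, vx, vy))
    return regions
```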


Iterative Generalized Hough Transform using Multiresolution Search (다중해상도 탐색을 이용한 반복 일반화 허프 변환)

  • ;W. Nick Street
    • Journal of KIISE: Software and Applications / v.30 no.10 / pp.973-982 / 2003
  • This paper presents an efficient method for automatically detecting objects in a given image. The GHT (Generalized Hough Transform) is a robust template-matching algorithm for automatic object detection: many different templates are applied in order to find objects of various shapes and sizes. Every boundary detected by the GHT can be used as an initial outline for more precise contour-finding techniques. The main weakness of the GHT is its excessive time and memory requirements. To overcome this drawback, the proposed algorithm uses a multiresolution search, scaling the original image down to half-sized and quarter-sized images. Using the information from the first iterative GHT on the quarter-sized image, the range of nuclear sizes is determined in order to limit the parameter space for the half-sized image. After the second iterative GHT on the half-sized image, nuclei are detected by a fine search and segmented with edge information, which helps determine the exact boundary. The experimental results show that this method reduces computation time and memory usage without loss of accuracy.
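
A simplified illustration of the coarse-to-fine idea, using OpenCV's circular Hough transform as a stand-in for the paper's generalized Hough transform: a coarse pass on a quarter-scale image bounds the radius range searched at the half scale. Parameter values below are assumptions.

```python
import cv2
import numpy as np

def coarse_to_fine_circles(gray: np.ndarray):
    """Bound the radius search at half scale using a quarter-scale coarse pass."""
    quarter = cv2.resize(gray, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)
    coarse = cv2.HoughCircles(quarter, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                              param1=100, param2=20, minRadius=2, maxRadius=30)
    if coarse is None:
        return None

    radii = coarse[0, :, 2] * 2                       # rescale radii to the half-size image
    r_min, r_max = int(radii.min() * 0.8), int(radii.max() * 1.2)

    half = cv2.resize(gray, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    return cv2.HoughCircles(half, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                            param1=100, param2=30, minRadius=r_min, maxRadius=r_max)
```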

Insertion Path Extraction of Catheter for Coronary Angiography (관상동맥 조영술을 위한 카테터 삽입 경로 추출)

  • Kim, Sung-Hu;Lee, Ju-Won;Kim, Joo-Ho;Lee, Han-Wook;Jung, Won-Geun;Lee, Gun-Ki
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.4 / pp.951-956 / 2011
  • Coronary angiography is commonly used for examining or treating coronary artery stenosis. In particular, when a cardiologist inserts a catheter into a heart blood vessel, a catheter-path detection system is needed because the cardiologist must avoid damaging the vessel. To reduce this difficulty, many researchers have studied various image processing techniques, such as vessel edge detection and optimal thresholding, but the results of these studies show different performance depending on the contrast and quality of the images. This study therefore proposes a new algorithm for coronary angiography that avoids these problems. The proposed algorithm consists of multi-sampling, interpolation, a threshold method, and the elimination of faulty points. To evaluate the performance of the proposed method, we used several angiographic images in experiments and found that the method is effective for detecting the catheter insertion path.
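
One speculative reading of the sampling-interpolation-elimination pipeline, in Python. The row-wise darkest-pixel sampling, the median-deviation outlier test, and all parameters below are my assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def catheter_path(gray: np.ndarray, num_samples: int = 40) -> np.ndarray:
    """Sample dark path points row-wise, drop faulty points, and spline-interpolate."""
    h, _ = gray.shape
    rows = np.linspace(0, h - 1, num_samples).astype(int)

    # Multi-sampling: catheters and vessels appear dark, so take the darkest column per row.
    pts = np.array([[np.argmin(gray[r]), r] for r in rows], dtype=float)

    # Fault-point elimination: discard samples far from the median column position.
    deviation = np.abs(pts[:, 0] - np.median(pts[:, 0]))
    keep = deviation < 3 * np.median(deviation) + 1e-6
    pts = pts[keep]

    # Interpolation: fit a smoothing spline through the remaining samples.
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=len(pts))
    xs, ys = splev(np.linspace(0, 1, 200), tck)
    return np.column_stack([xs, ys])
```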

Refinement of Building Boundary using Airborne LiDAR and Airphoto (항공 LiDAR와 항공사진을 이용한 건물 경계 정교화)

  • Kim, Hyung-Tae;Han, Dong-Yeob
    • Journal of the Korean Association of Geographic Information Studies / v.11 no.3 / pp.136-150 / 2008
  • Many studies have been carried out on the automatic extraction of buildings from LiDAR data or airphotos. Combining the 3D positional information of LiDAR with the shape information of imagery can improve accuracy. In this research, a contour-based building recognition algorithm was therefore used to improve the accuracy of building recognition from LiDAR data and to refine the building boundary using the airphoto. The contour-based building recognition algorithm can generate the building boundary and roof structure information, and it shows better building-detection accuracy than existing recognition methods based on TIN or NDSM. By creating fixed-size buffers around the building boundary estimated from the contours, this research limits the search area in the airphoto and refines the building boundary to fit the airphoto edges using a double active contour. Based on these results, 3D building boundaries should be obtainable in the future by optimal matching within the constant range of the extracted boundary.
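
A small sketch of the boundary-refinement step using a single active contour from scikit-image; the paper uses a double active contour, and the initial polygon, smoothing, and weight parameters here are assumptions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_boundary(airphoto_gray: np.ndarray, init_polygon: np.ndarray) -> np.ndarray:
    """Snap an initial building polygon (N x 2, row/col) to nearby airphoto edges.

    init_polygon would come from the LiDAR contour-based boundary estimate.
    """
    smoothed = gaussian(airphoto_gray, sigma=2, preserve_range=True)
    snake = active_contour(
        smoothed,
        init_polygon,
        alpha=0.01,   # elasticity of the contour
        beta=1.0,     # rigidity of the contour
        gamma=0.01,   # step size
    )
    return snake
```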


Skew correction of face image using eye components extraction (눈 영역 추출에 의한 얼굴 기울기 교정)

  • Yoon, Ho-Sub;Wang, Min;Min, Byung-Woo
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.12 / pp.71-83 / 1996
  • This paper describes a facial component detection and skew correction algorithm for face recognition. We use a priori knowledge and models about isolated regions to detect eye locations in face images captured in a natural office environment. The relations between facial components are represented by several rules. We adopt an edge detection algorithm using the Sobel mask and an 8-connected labelling algorithm using array pointers. A labelled image has many isolated components, so eye size rules are applied first; these rules are not affected much by irregular input conditions and constrain the component size and the ratio between horizontal and vertical extents. By the eye size rules, 2 to 16 candidate eye components are detected. Next, candidate eye pairs are verified using location and shape information, and one eye-pair location is decided using face models of the eyes and eyebrows. Once the eye regions are extracted, we connect the center points of the two eyes and calculate the angle between them, then rotate the face to compensate for the angle so that the two eyes lie on a horizontal line. We tested 120 input images from 40 people and achieved a 91.7% success rate using the eye size rules and the face model. The main cause of the 8.3% failure rate is components adjacent to the eyes, such as eyebrows. To detect facial components in the failed images, we are developing a mouth-region processing module.
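
The final rotation step maps directly to a few lines of code. A minimal sketch with OpenCV, assuming the two eye centers have already been located; function and variable names are mine.

```python
import math
import cv2
import numpy as np

def correct_skew(face: np.ndarray, left_eye: tuple, right_eye: tuple) -> np.ndarray:
    """Rotate the face so the line through the eye centers becomes horizontal."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = math.degrees(math.atan2(ry - ly, rx - lx))    # skew of the eye line

    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)           # rotate about the eye midpoint
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = face.shape[:2]
    return cv2.warpAffine(face, rotation, (w, h), flags=cv2.INTER_LINEAR)
```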


Optimization of Image Tracking Algorithm Used in 4D Radiation Therapy (4차원 방사선 치료시 영상 추적기술의 최적화)

  • Park, Jong-In;Shin, Eun-Hyuk;Han, Young-Yih;Park, Hee-Chul;Lee, Jai-Ki;Choi, Doo-Ho
    • Progress in Medical Physics / v.23 no.1 / pp.8-14 / 2012
  • In order to develop a patient respiratory management system, including a biofeedback function, for 4-dimensional radiation therapy, this study investigated the optimal tracking algorithm for a moving target using an IR (infra-red) camera as well as a commercial camera. A tracking system was developed with LabVIEW 2010, and motion phantom images were acquired with each camera. The acquired images were converted to binary images by applying a threshold, after which several edge enhancement methods (Sobel, Prewitt, Differentiation, Sigma, Gradient, and Roberts) were applied. A target pattern was defined in the images, and the acquired images of the moving target were tracked by matching the pre-defined tracking pattern; during matching, the coordinates of the tracking point were recorded. To assess the performance of each tracking algorithm, a score representing the accuracy of the pattern matching was defined. To compare the algorithms objectively, the experiments were repeated 3 times for 5 minutes for each algorithm, and the average value and standard deviation (SD) of the score were automatically calculated and saved in ASCII format. The score using the threshold alone was 706 with an SD of 84. The average and SD for the algorithms combining an edge detection method with the threshold were 794 and 64 for Sobel, 770 and 101 for Differentiation, 754 and 85 for Gradient, 763 and 75 for Prewitt, 777 and 93 for Roberts, and 822 and 62 for Sigma, respectively. According to this score analysis, the most efficient tracking algorithm is the Sigma method. Therefore, 4-dimensional radiation therapy is expected to be more efficient if the threshold and the Sigma edge detection method are used together in target tracking.
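
A rough sketch of the threshold, edge-enhancement, and pattern-matching loop in OpenCV terms. The study used LabVIEW; the Sobel filter, the normalized cross-correlation score, and the threshold value below are stand-ins I chose, not the study's implementation.

```python
import cv2
import numpy as np

def track_target(frame: np.ndarray, template: np.ndarray, thresh: int = 128):
    """Return the best-match location and a matching score for one video frame."""
    def preprocess(img):
        _, binary = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
        edges = cv2.Sobel(binary, cv2.CV_32F, 1, 1, ksize=3)   # edge enhancement
        return cv2.convertScaleAbs(edges)

    result = cv2.matchTemplate(preprocess(frame), preprocess(template),
                               cv2.TM_CCOEFF_NORMED)
    _, score, _, location = cv2.minMaxLoc(result)
    return location, score          # record `location` per frame to trace the motion
```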

Robust Vision Based Algorithm for Accident Detection of Crossroad (교차로 사고감지를 위한 강건한 비젼기반 알고리즘)

  • Jeong, Sung-Hwan;Lee, Joon-Whoan
    • The KIPS Transactions: Part B / v.18B no.3 / pp.117-130 / 2011
  • The purpose of this study is to provide a better way to detect accidents at crossroads, including an efficient method for producing background images that accounts for object movement and for preserving and verifying candidate accident regions. A prior study proposed using the traffic signal interval within the crossroad to detect accidents, but it may fail to detect an accident if another object occludes the accident site. This study adopts inverse perspective mapping to normalize object scale, and proposes methods for producing background images that are robust to surrounding noise, for generating candidate accident regions from object-movement information, and for preserving or deleting candidate accident regions using edge information. To measure the performance of the proposed algorithm, a variety of traffic videos were recorded and used in experiments: rush-hour footage from a DVR installed at a crossroad, accident footage recorded in daytime, at night, and on rainy days, and footage containing noise from lighting and shadows. Accidents were detected in all 20 experimental cases, and the effective accident detection rate averaged 76.9%. In addition, the image processing rate ranged from 10 to 14 frames/sec depending on the area of the detection region, so real-time image processing should pose no problem.
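
The inverse perspective mapping used above to normalize object scale can be sketched with a homography in OpenCV. The four road-plane correspondences would have to be calibrated for the actual camera; the points below are placeholders.

```python
import cv2
import numpy as np

def inverse_perspective_map(frame: np.ndarray) -> np.ndarray:
    """Warp a camera view of the crossroad onto an approximate top-down plane."""
    h, w = frame.shape[:2]
    # Four image points on the road surface (placeholder values) ...
    src = np.float32([[w * 0.30, h * 0.55], [w * 0.70, h * 0.55],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    # ... and where they should land in the bird's-eye view.
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, homography, (w, h))
```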

Development of the Algorithm for Traffic Accident Auto-Detection in Signalized Intersection (신호교차로 내 실시간 교통사고 자동검지 알고리즘 개발)

  • O, Ju-Taek;Im, Jae-Geuk;Hwang, Bo-Hui
    • Journal of Korean Society of Transportation / v.27 no.5 / pp.97-111 / 2009
  • Image-based traffic information collection systems have entered widespread adoption in many countries, since they can not only replace existing loop detectors, which have limitations in management and administration, but can also provide and manage a wide variety of traffic-related information; their purpose and scope of use are expanding rapidly. Currently, however, the use of image processing in traffic accident management is limited to installing surveillance cameras at locations where accidents are expected and digitizing the recorded data. Accurately recording the sequence of events around an accident at a signalized intersection, and then objectively and clearly analyzing how the accident occurred, is more urgent and important than anything else in resolving it. Many past studies have pointed out that existing techniques have limitations in real-time processing because of the large data volume involved in separating and tracking vehicle objects, which is difficult under the environmental diversity and change of a signalized intersection with complex traffic conditions. In this research, we therefore present a technology to overcome these problems and implement an active, environmentally adaptive method that effectively reduces the false detections that occur frequently even with Gaussian mixture model analysis, which is considered the best of the well-known methods for reducing environmental disturbances. To show that the developed technology outperforms existing automatic traffic accident recording systems, it was tested with image data fed online in real time from an operating intersection, and the results were compared with the performance of existing techniques.
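
For reference, Gaussian-mixture background modelling of the kind discussed above is available off the shelf in OpenCV. A minimal sketch of separating moving vehicles from the intersection background; the parameters are library defaults or guesses, not the paper's settings.

```python
import cv2

def vehicle_foreground_masks(video_path: str):
    """Yield a foreground (moving-object) mask for each frame of an intersection video."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Suppress shadow pixels (labelled 127 by MOG2) and small noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        yield mask
    capture.release()
```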

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.255-266 / 2022
  • The quality control of coarse aggregate, one of the main ingredients of concrete, is currently carried out by the SPC (Statistical Process Control) method through sampling. We build a smart factory for manufacturing innovation by changing the quality control of coarse aggregate to an image-based inspection using images acquired by a camera, instead of the current sieve analysis. First, the acquired images are preprocessed, and HED (Holistically-Nested Edge Detection), an edge filter learned by deep learning, segments each object. After each aggregate in the segmentation result is analyzed by image processing, the fineness modulus and the aggregate shape rate are determined from the analysis. The quality of the aggregate captured in the video was examined by calculating the fineness modulus and the aggregate shape rate, and the accuracy of the algorithm was more than 90% compared with sieve analysis of the same aggregate. Furthermore, the aggregate shape rate cannot be examined by conventional methods, but the approach in this paper also allows it to be measured; against models of known length it showed a difference of ±4.5%. In the case of measuring aggregate length, the algorithm result and the actual length differed by ±6%. Analyzing actual three-dimensional objects from two-dimensional video introduces deviations from the real data, which requires further research.
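
A sketch of the measurement side of such a pipeline: given a binary segmentation map (e.g. from HED or any edge detector), measure each aggregate with a minimum-area rectangle and compute a fineness modulus from the resulting size distribution. The sieve series, the pixel-to-millimetre scale, and the use of the rectangle's short side as the sieve-passing size are all illustrative assumptions, not the paper's procedure.

```python
import cv2
import numpy as np

SIEVES_MM = [75, 37.5, 19, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15]  # standard sieve series

def fineness_modulus(segmentation: np.ndarray, mm_per_pixel: float) -> float:
    """Estimate the fineness modulus from segmented aggregates in a binary image."""
    contours, _ = cv2.findContours(segmentation, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    sizes_mm = []
    for contour in contours:
        if cv2.contourArea(contour) < 20:          # ignore speckle
            continue
        (_, _), (width, height), _ = cv2.minAreaRect(contour)
        # The shorter side approximates the sieve opening the particle could pass.
        sizes_mm.append(min(width, height) * mm_per_pixel)

    sizes = np.array(sizes_mm)
    # Fineness modulus: sum of cumulative percentages retained on the sieves, / 100.
    cumulative_retained = [100.0 * np.mean(sizes >= s) for s in SIEVES_MM]
    return sum(cumulative_retained) / 100.0
```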