• Title/Summary/Keyword: Gray-scale

Search Result 436

A Camera Based Traffic Signal Generating Algorithm for Safety Entrance of the Vehicle into the Joining Road (차량의 안전한 합류도로 진입을 위한 단일 카메라 기반 교통신호 발생 알고리즘)

  • Jeong Jun-Ik;Rho Do-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.66-73
    • /
    • 2006
  • Safety is the most important factor in all traffic management and control technology. This paper focuses on developing a flexible, reliable, real-time algorithm that generates a signal for vehicles entering from a joining road using a single camera and image processing techniques. The images obtained from a camera mounted beside and above the road are used for traffic surveillance, for measuring a vehicle's travel speed, and for predicting its arrival time at the merging area between the main road and the joining road. The proposed algorithm then displays a confluence safety signal with red, blue, and yellow color signs. Three methods are used to detect a vehicle driving in the designated detection area: a gray-scale normalized correlation algorithm, an edge magnitude ratio change algorithm, and an average intensity change algorithm (a minimal sketch of the first appears below). A real-time prototype of the confluence safety signal generation algorithm was implemented and run on stored digital image sequences of real traffic, with good experimental results.
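
A minimal sketch of the first detector, gray-scale normalized correlation, is shown below. It is an illustrative reading of that step rather than the authors' implementation: it assumes a fixed detection window, a stored gray-scale template of the empty road, and a hand-picked correlation threshold, and it flags a vehicle whenever the current window no longer correlates with the template.

```python
import numpy as np

def normalized_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two gray-scale patches."""
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    if denom == 0:
        return 1.0  # flat patches: treat as identical to avoid division by zero
    return float((p * t).sum() / denom)

def vehicle_present(frame: np.ndarray, roi, empty_template: np.ndarray,
                    threshold: float = 0.7) -> bool:
    """Declare a vehicle in the detection area when the current ROI no longer
    correlates with the stored empty-road template (threshold is illustrative)."""
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    return normalized_correlation(patch, empty_template) < threshold
```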

Detection of Fallen Pear Bags caused by Natural Disaster (자연 재해로 인하여 낙과된 무채색 배 봉지 검출)

  • Choi, Doo-Hyun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.1
    • /
    • pp.153-158
    • /
    • 2016
  • This paper presents an algorithm for detecting pear bags that have fallen because of natural disasters such as heavy rain, typhoons, and hurricanes. The algorithm is developed for the gray pear bags with printed characters that are widely used at the pear farms of Sangju and Naju, which produce large quantities of pears for export. It first sets a region of interest (ROI) and then eliminates regions of chromatic color within the ROI (see the sketch below). Morphological operations and prior information are used to eliminate small noise and several atypical regions, leaving only the regions of fallen pear bags. The remaining regions are analyzed and counted to estimate the scale of the damage. The test images were taken at pear farms in Sangju and Naju in 2014. Experimental results show a pear bag detection rate of more than 90%, and the proposed system can be implemented in real time on hand-held devices because of its simple, parallel architecture.
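
The chromatic-color elimination and morphological cleanup can be sketched roughly as below. The saturation threshold used to define "achromatic", the ROI handling, and the minimum-area filter are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

def detect_achromatic_bags(image_bgr: np.ndarray, roi, sat_thresh: int = 40,
                           min_area: int = 500):
    """Return bounding boxes (relative to the ROI) of candidate achromatic bag regions."""
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    # Achromatic (gray) pixels have low saturation; chromatic pixels are discarded.
    mask = (hsv[:, :, 1] <= sat_thresh).astype(np.uint8) * 255
    # Morphological opening removes small noise blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Keep only connected components large enough to be a fallen bag.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```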

Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

  • Mir, Arash Poorsattar Bejeh;Mir, Morvarid Poorsattar Bejeh
    • Imaging Science in Dentistry
    • /
    • v.42 no.3
    • /
    • pp.163-167
    • /
    • 2012
  • Purpose: ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins with various tube-target distances and exposure times. Materials and Methods: Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. The radiopacities measured with Digora for Windows software 2.5 were then converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray scale). Furthermore, a linear regression model relating aluminum thickness to absorbency was developed and used to convert the radiopacity of the dental materials to an equivalent aluminum thickness. All calculations were also compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). Results: The radiopacities of the composite resins differed significantly among setups (p<0.001) and between the materials (p<0.001). The best predictive model was obtained for the 30 cm, 0.2 seconds setup ($R^2$=0.999). Results from the reduced, modified stepwedge were comparable with those from the 12-step stepwedge. Conclusion: Within the limits of the present study, our findings support that different setups can influence the radiopacity of dental materials on digital radiographs.
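
The gray-to-absorbency conversion and the aluminum calibration can be written out directly from the stated formula. The sketch below assumes a base-10 logarithm and uses placeholder stepwedge readings (not the study's data) to fit the linear model and convert a material's gray value into an equivalent aluminum thickness.

```python
import numpy as np

def absorbency(gray):
    """Convert an 8-bit gray value G to absorbency A = -log10(1 - G/255)."""
    g = np.asarray(gray, dtype=np.float64)
    return -np.log10(1.0 - g / 255.0)

# Calibration: absorbency of each aluminum step vs. its thickness in mm.
# (Placeholder readings; the study used a 12-step, 1 mm incremental wedge.)
thickness_mm = np.arange(1, 13, dtype=np.float64)
step_gray = np.array([30, 55, 78, 99, 118, 135, 150, 164, 176, 187, 197, 206])
A = absorbency(step_gray)

# Least-squares line A = a * thickness + b, inverted for unknown materials.
a, b = np.polyfit(thickness_mm, A, 1)

def equivalent_aluminum_mm(material_gray):
    """Express a material's radiopacity as the aluminum thickness with equal absorbency."""
    return (absorbency(material_gray) - b) / a
```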

3-Tire File Encryption algorithm using GSF (GSF(GrayScale File) 출력을 이용한 3-Tire 파일 암호화 알고리즘)

  • Kim Young-Shil;Kim Young-Mi;Kim Ryun-Ok;Baik Doo-Kwon
    • The Journal of Information Technology
    • /
    • v.5 no.4
    • /
    • pp.115-127
    • /
    • 2002
  • This paper proposes an improved file encryption algorithm that represents the ciphertext as a grayscale image rather than hiding it in a separate cover image. The method consists of three encryption tiers. Tiers 1 and 2 encrypt the information using an existing stream algorithm and a block algorithm with a modified padding method. As the third tier we propose the MBE method, which hides the structure and format of the encrypted file. The proposed method outputs a grayscale file (GSF) as the result of encryption, and because the GSF outputs produced from different kinds of plaintext have similar patterns, we obtain both file encryption and concealment of the file information (a sketch of the grayscale-output idea appears below). To solve the padding problem of the block algorithm, we also propose a new padding algorithm called SELI (Select Insert) and apply it to the Tier-2 block algorithm and the Tier-3 MBE algorithm.
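
The grayscale-output idea, writing ciphertext bytes out as an image so the file reveals little about its contents, can be sketched independently of the paper's SELI padding and MBE stages. The PGM packing below is an illustrative assumption, not the actual GSF format.

```python
import math

def bytes_to_pgm(cipher: bytes, path: str, width: int = 256) -> None:
    """Pack ciphertext bytes into an 8-bit grayscale PGM image.
    Each byte becomes one pixel; the last row is zero-padded."""
    height = math.ceil(len(cipher) / width)
    padded = cipher + bytes(height * width - len(cipher))
    with open(path, "wb") as f:
        f.write(f"P5\n{width} {height}\n255\n".encode("ascii"))
        f.write(padded)

def pgm_to_bytes(path: str, length: int) -> bytes:
    """Recover the ciphertext (of known length) from the PGM pixel data."""
    with open(path, "rb") as f:
        data = f.read()
    # Skip the three-line ASCII header written by bytes_to_pgm.
    body = data.split(b"\n", 3)[3]
    return body[:length]
```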

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.2C
    • /
    • pp.16-24
    • /
    • 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometrical transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue from different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face is initially detected from the camera using statistical characteristics of face colors and deformable templates, and is then tracked (a minimal color-based sketch follows). As a result, we can provide a motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
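
A minimal sketch of the color-statistics part of the tracker is shown below; it omits the deformable templates and simply tracks the largest skin-colored blob. The CrCb bounds are commonly used illustrative values, not the statistics estimated in the paper.

```python
import cv2
import numpy as np

def track_dominant_face(frame_bgr: np.ndarray):
    """Return the bounding box (x, y, w, h) of the largest skin-colored region, or None.
    The Cr/Cb bounds are common illustrative values, not the paper's statistics."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV orders Y, Cr, Cb
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    face = max(contours, key=cv2.contourArea)  # dominant (largest) face candidate
    return cv2.boundingRect(face)
```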

Content based Image Retrieval using RGB Maximum Frequency Indexing and BW Clustering (RGB 최대 주파수 인덱싱과 BW 클러스터링을 이용한 콘텐츠 기반 영상 검색)

  • Kang, Ji-Young;Beak, Jung-Uk;Kang, Gwang-Won;An, Young-Eun;Park, Jong-An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.1 no.2
    • /
    • pp.71-79
    • /
    • 2008
  • This study proposes a content-based image retrieval system that uses RGB maximum frequency indexing and BW clustering to overcome the errors of existing histogram-based retrieval. We split an RGB color image into its R, G, and B channels, compute a histogram evenly divided into 32 bins for each channel, analyze the pixel count of each bin, and take the bin with the maximum value. The color information obtained is indexed and used to retrieve the 100 most similar images, and the final retrieval is performed using the total number and distribution rate of the clusters (a sketch of the indexing step appears below). By combining the features obtained from R, G, and B with the spatial information from the clusters, the proposed algorithm overcomes the disadvantage of existing gray-scale algorithms, which judge different images to be the same when they have the same frequencies of shade. Performance measured by recall and precision shows that the retrieval rate and ranking of the proposed algorithm are better than those of the existing algorithm.
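
The indexing step, a 32-bin histogram per channel with the most frequent bin kept as the key, can be sketched as below. This is an illustrative reading of that step only; the BW clustering stage and the final ranking are omitted, and the L1 key distance is an assumption.

```python
import numpy as np

def rgb_max_frequency_index(image_rgb: np.ndarray, bins: int = 32):
    """For each of R, G, B, build a histogram with `bins` bins over 0..255
    and return the index of the most frequent bin as a 3-value color key."""
    key = []
    for c in range(3):
        hist, _ = np.histogram(image_rgb[:, :, c], bins=bins, range=(0, 256))
        key.append(int(hist.argmax()))
    return tuple(key)  # e.g. (12, 7, 21): one bin index per channel

def index_distance(key_a, key_b):
    """Simple L1 distance between two index keys, usable to rank candidate images."""
    return sum(abs(a - b) for a, b in zip(key_a, key_b))
```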

Multi-Level Digital Watermarking for Color Image of Multimedia Contents (멀티미디어 컨텐츠의 컬러 영상에 대한 다중 레벨 디지털 워터마킹)

  • Park, Hung-Bog;Seo, Jung-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.11
    • /
    • pp.1946-1953
    • /
    • 2006
  • Because a watermark embedded in the luminance component still allows ownership information to be extracted after the color image is converted to a gray-scale image, the ownership information is embedded in the luminance component of a luminance-chrominance color space such as YCbCr. This paper therefore proposes watermark embedding, extraction, and authentication algorithms for color images that consider the devices and performance of multimedia content services, focusing on the robustness and invisibility of the watermark. The color image is converted from the RGB color space to the YCbCr color space, and the properties of the Y (luminance), Cb, and Cr (chrominance) components are considered in order to embed, extract, and verify a multi-level watermark in the wavelet-based frequency domain (a minimal embedding sketch follows). As a result, robustness against JPEG compression and invisibility of the multi-level watermark can be guaranteed.
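
A minimal sketch of wavelet-domain embedding in the luminance channel is given below, using PyWavelets. The one-level Haar transform, the choice of detail subband, and the additive strength alpha are assumptions for illustration, not the paper's multi-level scheme.

```python
import numpy as np
import pywt  # PyWavelets

def embed_watermark_y(y_channel: np.ndarray, watermark_bits: np.ndarray,
                      alpha: float = 8.0) -> np.ndarray:
    """Additively embed +/-alpha per watermark bit into one detail subband of a
    one-level Haar DWT of the luminance (Y) channel, then reconstruct."""
    ca, (ch, cv, cd) = pywt.dwt2(y_channel.astype(np.float64), "haar")
    flat = ch.flatten()
    n = min(flat.size, watermark_bits.size)
    flat[:n] += alpha * (2.0 * watermark_bits[:n] - 1.0)  # bit 1 -> +alpha, bit 0 -> -alpha
    ch = flat.reshape(ch.shape)
    marked = pywt.idwt2((ca, (ch, cv, cd)), "haar")
    # idwt2 may pad odd-sized inputs by one row/column; crop back to the original size.
    marked = marked[:y_channel.shape[0], :y_channel.shape[1]]
    return np.clip(marked, 0, 255).astype(np.uint8)
```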

Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.3
    • /
    • pp.847-852
    • /
    • 2010
  • Text in natural images is an important image feature with large variability, so detecting, extracting, and recognizing text is studied as an important research area. Recently, many applications in various fields have been developed on top of mobile phone camera technology. The proposed method detects edge components from the gray-scale image, finds the boundaries of text regions using the local standard deviation, and obtains connected components using the Euclidean distance in RGB color space. The detected edges and connected components are labeled, and a bounding box is obtained for each region. Text candidates are then selected with heuristic rules for text. The candidate text regions are merged into single candidate regions, and text regions are finally detected by verifying the candidates using the adjacency and similarity between them (a sketch of the edge/local-deviation step follows). Experimental results show that the text region detection rate is improved by using the complementary edge and color connected components.
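
The edge and local-standard-deviation step can be sketched as below. The window size and deviation threshold are illustrative assumptions; the color connected-component stage and the heuristic verification are omitted.

```python
import cv2
import numpy as np

def text_candidate_mask(image_bgr: np.ndarray, win: int = 15,
                        std_thresh: float = 20.0) -> np.ndarray:
    """Mark pixels whose local gray-level standard deviation is high, a cue
    that dense edges (e.g. printed text) are present in the neighborhood."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = cv2.blur(gray, (win, win))
    mean_sq = cv2.blur(gray * gray, (win, win))
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return (local_std > std_thresh).astype(np.uint8) * 255

def candidate_boxes(mask: np.ndarray, min_area: int = 100):
    """Bounding boxes of connected high-variance regions (candidate text regions)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```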

Design of Improved UI of Automatic Parking Management System using License Plate Recognition (번호판 인식을 통한 자동 주차관리 시스템의 개선된 UI 설계)

  • Kim, Bong-Gi
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.2
    • /
    • pp.1083-1088
    • /
    • 2014
  • Recently, advances in imaging technology and ICT have made various image processing services available, and applications combining the two technologies are diversifying. Recognition of vehicle license plates is used wherever vehicle information is needed, such as in parking management. However, existing systems have economic disadvantages, such as issuing parking tickets and attaching unnecessary equipment. To solve these problems, we designed and implemented an automatic parking management system that recognizes vehicle license plates using Emgu CV, which is based on OpenCV (a rough localization sketch follows). In addition, we designed an improved UI that presents the entire parking management situation, including the details of each parked vehicle, parking time, and remaining parking spaces, without screen transitions. The improved UI is implemented with WPF, a recent technology for user program development. Emgu CV as used in this paper showed its best performance in an Intel-based environment, yielding a recognition processing time within 0.5 seconds and a recognition rate over 90%. Through the improved UI, the manager can manage the entire system simply and intuitively.
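
The paper implements plate recognition with Emgu CV (a .NET wrapper over OpenCV); the rough localization sketch below uses OpenCV's Python bindings instead, and its aspect-ratio and area thresholds are illustrative assumptions. Character recognition is not included.

```python
import cv2

def plate_candidates(image_bgr, min_aspect=2.0, max_aspect=6.0, min_area=1000):
    """Return bounding boxes whose shape roughly matches a license plate.
    Thresholds are illustrative; character recognition is a separate step."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Close horizontal gaps so plate borders form solid contours.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area and min_aspect <= w / float(h) <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes
```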

Video-based Intelligent Unmanned Fire Surveillance System (영상기반 지능형 무인 화재감시 시스템)

  • Jeon, Hyoung-Seok;Yeom, Dong-Hae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.4
    • /
    • pp.516-521
    • /
    • 2010
  • In this paper, we propose a video-based intelligent unmanned fire surveillance system using fuzzy color models. In general, a fire surveillance system requires a separate device to detect heat or smoke; this system, however, can be implemented with widely deployed CCTV, requiring no separate devices or extra cost. Video-based fire surveillance systems mainly extract smoke or flames from the input image alone. Smoke is difficult to extract at night because of its gray-scale color, and the flame color depends on the temperature, the inflammable material, the size of the flame, and so on, which makes it hard to extract the flame region from the input image. This paper deals with an intelligent fire surveillance system that is robust against variations in flame color, especially at night. The proposed system extracts the moving object from the input image, decides whether the object is a flame based on the color obtained from the fuzzy color model and the shape obtained from a histogram, and issues a fire alarm when the flame spreads (a simplified sketch follows). Finally, we verify the efficiency of the proposed system through experiments with a controlled real fire.
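
A simplified sketch of the moving-object and flame-color test is shown below. A standard background subtractor stands in for the paper's moving-object extraction, and a crude RGB rule stands in for the fuzzy color model; both are assumptions for illustration.

```python
import cv2
import numpy as np

# Standard background subtractor standing in for the paper's moving-object extraction.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def flame_like_regions(frame_bgr: np.ndarray, min_area: int = 300):
    """Return bounding boxes of moving regions whose color is flame-like.
    The RGB rule below is a crude stand-in for the paper's fuzzy color model."""
    motion = bg_subtractor.apply(frame_bgr)
    b, g, r = cv2.split(frame_bgr)
    # Crude flame-color cue: red is bright and dominates green, which dominates blue.
    color = ((r > 180) & (r >= g) & (g > b)).astype(np.uint8) * 255
    mask = cv2.bitwise_and(motion, color)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```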