• Title/Abstract/Keyword: Image Transformation

Search results: 1,071 (processing time: 0.023 s)

Recognition of Car Manufacturers using Faster R-CNN and Perspective Transformation

  • Ansari, Israfil;Lee, Yeunghak;Jeong, Yunju;Shim, Jaechang
    • 한국멀티미디어학회논문지
    • /
    • Vol.21 No.8
    • /
    • pp.888-896
    • /
    • 2018
  • In this paper, we report the detection and recognition of vehicle logos in images captured by street CCTV. The image data include both front and rear views of the vehicles. The proposed method is a two-step process that combines image preprocessing with a faster region-based convolutional neural network (Faster R-CNN) for logo recognition. Without preprocessing, Faster R-CNN accuracy is high only when image quality is good, whereas the proposed system targets street CCTV cameras, whose image quality differs from that of a front-facing camera. Using perspective transformation, the top-view images are transformed into front-view images. With this system, detection and recognition accuracy are much higher than with the existing algorithm. Experimental results show that the detection and recognition rate improved by 2% on daytime data, and the detection rate improved by 14% on nighttime data.
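
A minimal sketch of the perspective-transformation preprocessing described above, assuming the four corner points of the vehicle region in the CCTV frame are already known; the file names and corner coordinates are hypothetical placeholders, and the rectified image would then be passed to the Faster R-CNN detector.

```python
import cv2
import numpy as np

def rectify_vehicle_view(frame, corners, out_size=(400, 300)):
    """Warp a slanted CCTV view of a vehicle into an approximate front view."""
    w, h = out_size
    src = np.float32(corners)                            # 4 corners in the CCTV image
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # target rectangle
    H = cv2.getPerspectiveTransform(src, dst)            # 3x3 homography
    return cv2.warpPerspective(frame, H, (w, h))

if __name__ == "__main__":
    frame = cv2.imread("cctv_frame.jpg")                 # hypothetical CCTV frame
    corners = [(120, 80), (520, 95), (540, 360), (100, 340)]
    front_view = rectify_vehicle_view(frame, corners)
    cv2.imwrite("front_view.jpg", front_view)            # input to the logo detector
```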

FWT-CIT를 적용한 그레이 영상의 의사컬러 변환 및 향상 (A Gray Image to Pseudocoloring Conversion and Enhancement Using FWT and CIT)

  • 류광렬
    • 한국정보통신학회논문지
    • /
    • Vol.8 No.7
    • /
    • pp.1464-1468
    • /
    • 2004
  • This paper presents a method that converts a gray image into a color image and enhances the output image by transforming its color intensity. For pseudocoloring, the RGB color components are extracted by rearranging the filter bank of a 2D fast wavelet transform (FWT); in post-processing, a discrete color intensity transform (CIT) is applied to each mono-color channel for noise removal and image enhancement. Experimental results show that the output image is improved by more than 30 dB in PSNR compared with applying an ordinary wavelet transform.
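
An illustrative sketch of wavelet-based pseudocoloring, assuming PyWavelets: the grayscale image is decomposed with a 2D fast wavelet transform and the sub-bands are mapped to R, G, and B channels. The paper's exact filter-bank rearrangement and its discrete color intensity transform (CIT) are not given in the abstract, so the mapping, wavelet, and file names below are assumptions.

```python
import numpy as np
import pywt
from PIL import Image

def pseudocolor(gray):
    # One-level 2D fast wavelet transform: approximation + 3 detail sub-bands.
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float64), "haar")

    def to_uint8(band):
        band = np.abs(band)
        return np.uint8(255 * band / (band.max() + 1e-8))

    # Hypothetical channel assignment: approximation -> R, horizontal -> G, vertical -> B.
    return np.dstack([to_uint8(cA), to_uint8(cH), to_uint8(cV)])

if __name__ == "__main__":
    gray = np.array(Image.open("gray_input.png").convert("L"))  # hypothetical input
    Image.fromarray(pseudocolor(gray)).save("pseudocolor.png")
```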

이산 웨이블렛 변환과 프렉탈 이론을 이용한 영상부호화 기법 (Image Compression Technique Using Discrete Wavelet Transform and Fractal Theory)

  • 김용호;정종근;편석범;이윤배
    • 대한전자공학회논문지TE
    • /
    • Vol.39 No.4
    • /
    • pp.423-430
    • /
    • 2002
  • JPEG, the current standard for still-image compression, performs compression after applying the discrete cosine transform (DCT), so severe blocking artifacts occur at high compression ratios, and distortion (aliasing) appearing in the reconstructed image degrades image quality. Transform coding can achieve high compression ratios, but the transform and inverse transform may introduce quality degradation. To address these problems, this paper applies the wavelet transform and fractal theory to still images; at low bit rates the proposed method improves reconstruction speed and compression ratio compared with existing methods and removes blocking artifacts. The reconstructed image quality is also shown to be superior to that of existing methods.
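
A minimal sketch of the wavelet side of such a scheme, assuming PyWavelets: a multi-level DWT is applied and small detail coefficients are discarded before reconstruction. The fractal coding of the sub-bands described in the paper is omitted; the wavelet name and retention ratio are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt

def dwt_compress(img, wavelet="bior4.4", level=3, keep_ratio=0.05):
    """Keep only the largest `keep_ratio` fraction of DWT coefficients, then reconstruct."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)      # global hard threshold
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

# Usage: reconstructed = dwt_compress(gray_image_array, keep_ratio=0.05)
```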

압축방식에 따른 디지털 인쇄사진의 품질 변화에 관한 연구 (A Study on Quality Transformation of Digital Printing Photographs According to Compression Method)

  • 조가람;구철회
    • 한국인쇄학회지
    • /
    • Vol.21 No.1
    • /
    • pp.35-44
    • /
    • 2003
  • With the development of computers, digital images are used in many application fields, such as the web, electronic publishing, printing, dynamic image management, and photo CD production; however, their storage and management raise many problems. Image management therefore relies on compression methods that reduce file size without noticeably degrading the image. This study used the sequential DCT-based and progressive DCT-based modes of the JPEG (Joint Photographic Experts Group) compression method as well as a wavelet compression method. Analog and digital images were converted and compressed in several stages according to compression rate, and the optimum compression rate was investigated by comparing the quality change between the original and compressed images. The compressed images were then printed, and their quality was evaluated by a subjective assessment method to examine the validity and usefulness of the approach.
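
A hedged sketch of this kind of comparison, assuming Pillow and NumPy: the same image is saved as sequential (baseline) and progressive JPEG at several quality settings and the PSNR against the original is reported. The input file name and quality steps are illustrative, not the study's actual test conditions.

```python
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("photo.tif").convert("RGB")        # hypothetical scanned photograph
ref = np.array(original)

for quality in (90, 75, 50, 25):
    for progressive in (False, True):
        name = f"out_q{quality}_{'prog' if progressive else 'seq'}.jpg"
        original.save(name, "JPEG", quality=quality, progressive=progressive)
        rec = np.array(Image.open(name).convert("RGB"))
        print(name, round(psnr(ref, rec), 2), "dB")      # quality vs. compression setting
```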

전자출판에서 입.출력 장치의 컬러 관리에 관한 연구 (I) (A Study on Color Management of Input and Output Device in Electronic Publishing (I))

  • 조가람;김재해;구철회
    • 한국인쇄학회지
    • /
    • Vol.25 No.1
    • /
    • pp.11-26
    • /
    • 2007
  • In this paper, an experiment was performed in which the input devices used linear multiple regression and the sRGB color space for color transformation, and the output devices used the GOG, GOGO, and sRGB models. After the input-device color transformation, a 3 × 20 matrix was used in the linear multiple regression, and the scanner's color reproduction was better than that of the digital still camera. When the sRGB color space was used, the original and the output copy showed a color difference of 11, so the linear multiple regression method was more efficient than the sRGB color space. After the input-device color transformation, the additivity of the LCD monitor's R, G, and B signal values improved and the error of the linear transformation therefore decreased; as a result, the LCD monitor with the GOG model applied to the color transformation performed better than LCD monitors with the other models. Also, the color difference from the original target exceeded 11 on CRT and LCD monitors when an sRGB color transformation was performed under restricted conditions.
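
A small sketch of the linear multiple-regression colour transformation, assuming NumPy: device RGB values of training patches are expanded into polynomial terms and regressed onto measured target values. The paper uses a 3 × 20 matrix, but the abstract does not list the 20 terms, so the 10-term expansion and the variable names below are assumptions for illustration only.

```python
import numpy as np

def expand(rgb):
    """Expand device RGB rows into polynomial regression terms (assumed term set)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    return np.stack([ones, r, g, b, r * g, g * b, r * b, r ** 2, g ** 2, b ** 2], axis=1)

def fit_color_matrix(device_rgb, target_xyz):
    """Least-squares fit of a matrix mapping expanded device RGB to target CIEXYZ."""
    X = expand(device_rgb)                           # (patches, N terms)
    M, *_ = np.linalg.lstsq(X, target_xyz, rcond=None)
    return M                                         # (N, 3); its transpose is the 3 x N matrix

def apply_color_matrix(M, device_rgb):
    return expand(device_rgb) @ M

# Usage with hypothetical chart measurements:
# M = fit_color_matrix(scanner_rgb, measured_xyz)
# predicted_xyz = apply_color_matrix(M, scanner_rgb)
```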

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.13 No.8
    • /
    • pp.3981-4004
    • /
    • 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, and the necessary modifications are made to integrate perspective transformation, camera calibration, un-distortion, etc. Experiments are performed with two types of cameras, one with barrel distortion and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by unifying the sizes of all objects.
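
A minimal sketch of the measurement step, assuming OpenCV and that the four corner points of the region of interest have already been detected; the corner coordinates, the 5 m × 3 m real-space dimensions, and the object positions below are hypothetical.

```python
import cv2
import numpy as np

corners_px = np.float32([[102, 88], [610, 95], [630, 420], [85, 410]])   # detected corners
real_w, real_h = 5.0, 3.0                                                # known dimensions (m)
corners_real = np.float32([[0, 0], [real_w, 0], [real_w, real_h], [0, real_h]])

H = cv2.getPerspectiveTransform(corners_px, corners_real)  # image plane -> real plane

def to_real(pt_px):
    pt = np.float32([[pt_px]])                    # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(pt, H)[0, 0]

def distance_m(p_px, q_px):
    """Euclidean distance in metres between two detected pixel locations."""
    return float(np.linalg.norm(to_real(p_px) - to_real(q_px)))

# e.g. two detected object centres (pixel coordinates from the detector):
print(distance_m((250, 300), (480, 150)), "m")
```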

경량화된 임베디드 시스템에서 역 원근 변환 및 머신 러닝 기반 차선 검출 (Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System)

  • 홍성훈;박대진
    • 대한임베디드공학회논문지
    • /
    • Vol.17 No.1
    • /
    • pp.41-49
    • /
    • 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning in a lightweight embedded system. The inverse perspective transformation method is presented for obtaining a bird's-eye view of the scene from a perspective image in order to remove perspective effects. This method requires only the internal and external parameters of the camera, without a homography matrix with 8 degrees of freedom (DoF) that maps the points in one image to the corresponding points in the other image. To improve the accuracy and speed of lane detection in complex road environments, a machine learning algorithm is applied only to regions that have passed a first classifier. The first classifier is applied to the bird's-eye-view image to determine candidate lane regions and improves the detection speed. A lane region that passes the first classifier is then detected more accurately through machine learning. The system has been tested on driving video of a vehicle in an embedded system. The experimental results show that the proposed method works well in various road environments and meets real-time requirements: its lane detection speed is about 3.85 times faster than edge-based lane detection, and its detection accuracy is better than that of edge-based lane detection.
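
A sketch of inverse perspective mapping built only from the camera intrinsic matrix K and extrinsics (R, t), in line with the abstract's point that no 8-DoF point-correspondence homography is needed; all numeric parameters (intrinsics, pose, ground-patch size, resolution, file names) are hypothetical placeholders, not the paper's values.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                        # placeholder intrinsics (fx, fy, cx, cy)
rvec = np.array([[np.deg2rad(100.0)], [0.0], [0.0]])   # placeholder camera rotation vector
R, _ = cv2.Rodrigues(rvec)
t = np.array([[0.0], [-1.5], [0.0]])                   # placeholder camera translation (m)

# Homography from road-plane coordinates (X, Y, 0) to image pixels: H = K [r1 r2 t].
H_road_to_img = K @ np.hstack([R[:, 0:1], R[:, 1:2], t])

# Bird's-eye-view raster: 2 cm per pixel over a 10 m x 20 m patch ahead of the vehicle.
scale, bev_w, bev_h = 0.02, 500, 1000
A = np.array([[scale, 0.0, -5.0],                      # BEV pixel -> road metres
              [0.0, -scale, 20.0],
              [0.0, 0.0, 1.0]])
H_bev_to_img = H_road_to_img @ A                       # BEV pixel -> image pixel

frame = cv2.imread("road_frame.jpg")                   # hypothetical camera frame
bev = cv2.warpPerspective(frame, H_bev_to_img, (bev_w, bev_h),
                          flags=cv2.WARP_INVERSE_MAP)  # sample the source through H_bev_to_img
cv2.imwrite("birds_eye_view.jpg", bev)                 # lane classification runs on this image
```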

다중 클래스의 이미지 장면 분류 (Image Scene Classification of Multiclass)

  • 신성윤;이현창;신광성;김형진;이재완
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2021년도 추계학술대회
    • /
    • pp.551-552
    • /
    • 2021
  • This paper presents a multiclass image scene classification method based on transfer learning. Relying on a network model pre-trained on the large ImageNet image dataset, natural scene images of multiple classes are classified. In the experiments, an optimized ResNet model was applied to Kaggle's Intel Image Classification dataset and achieved excellent results.
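
A short sketch of this transfer-learning setup, assuming PyTorch/torchvision: an ImageNet-pretrained ResNet-50 is reused, only its final layer is replaced for the six scene classes of the Intel Image Classification dataset, and a single training pass is shown. The directory layout, model variant, and hyperparameters are assumptions, not those of the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # buildings, forest, glacier, mountain, sea, street

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("seg_train", transform=tf)   # hypothetical dataset path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():                      # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for images, labels in loader:                     # one pass shown; more epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```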

영상변형:얼굴 스케치와 사진간의 증명가능한 영상변형 네트워크 (Image Translation: Verifiable Image Transformation Networks for Face Sketch-Photo and Photo-Sketch)

  • 숭타이리엥;이효종
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2019년도 춘계학술발표대회
    • /
    • pp.451-454
    • /
    • 2019
  • In this paper, we propose verifiable image transformation networks to transform a face sketch to a photo and vice versa. Face sketch-photo synthesis is very popular in computer vision applications and has been used in specific fields such as law enforcement and digital entertainment. There are several existing face sketch-photo synthesis methods that use feed-forward convolutional neural networks; however, it is hard to assure that their results are well mapped by depending on loss values or accuracy results alone. In our approach, we use two ResNet encoder-decoder networks as image transformation networks: one for sketch-to-photo and one for photo-to-sketch. They depend on each other to verify their output results during training. For example, the photo-to-sketch network verifies the photo result of the sketch-to-photo network by taking that result as input and computing the loss between the reverse-transformed result and the ground-truth sketch; likewise, the sketch result can be verified in the reverse direction. Our networks contain two loss functions, the sketch-photo loss and the photo-sketch loss, for the basic transformation stages, and two further loss functions, the sketch-photo verification loss and the photo-sketch verification loss, for the verification stages. Our experiments on the CUFS dataset achieve reasonable results compared with state-of-the-art approaches.
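
A hedged sketch of the training objective, assuming PyTorch: two transformation networks verify each other by re-transforming the other's output and comparing it with the paired ground truth, giving two basic losses and two verification losses. The tiny network below is only a placeholder for the paper's ResNet encoder-decoders, and the L1 losses and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class TinyTransformer(nn.Module):
    """Stand-in for the ResNet encoder-decoder transformation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

sketch2photo, photo2sketch = TinyTransformer(), TinyTransformer()
l1 = nn.L1Loss()

def total_loss(sketch, photo):
    fake_photo = sketch2photo(sketch)
    fake_sketch = photo2sketch(photo)
    # Basic transformation losses against the paired ground truth.
    loss_sp, loss_ps = l1(fake_photo, photo), l1(fake_sketch, sketch)
    # Verification losses: each network checks the other's output.
    verify_sp = l1(photo2sketch(fake_photo), sketch)
    verify_ps = l1(sketch2photo(fake_sketch), photo)
    return loss_sp + loss_ps + verify_sp + verify_ps

# e.g. with a paired batch of single-channel sketch/photo tensors:
s, p = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(total_loss(s, p).item())
```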

Wavelet 변환 방식을 이용한 인쇄물 평가에 관한 연구 (A study on print estimation using wavelet transformation method)

  • 김택준;조가람;구철희
    • 한국인쇄학회지
    • /
    • Vol.20 No.1
    • /
    • pp.28-44
    • /
    • 2002
  • Wavelet transformation in image compression offers higher compressibility and high quality through quantization and entropy encoding, and an image reconstructed by wavelet computation has better quality than one obtained with the cosine transform. A wavelet is itself a function whose characteristic is that processing is applied at different scales and resolutions. That is, unlike existing compression methods in which a fixed resolution is decided, adjusting the scale does not damage every pixel, so the picture does not break up even when the decoded image is magnified or reduced. This paper therefore studies a new wavelet-based compression method for images: each image was compressed step by step, and its compression efficiency was compared with that of the original image. In addition, the quality of the compressed images was evaluated after printing, and the optimum compression ratio for obtaining high-quality prints and improving transmission speed was investigated.
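
A small illustration of the stepwise experiment described above, assuming PyWavelets: an image is wavelet-compressed at several coefficient-retention ratios and the PSNR is reported, so that a compression level suitable for printing can be chosen. The file name, wavelet, and retention ratios are illustrative only.

```python
import numpy as np
import pywt
from PIL import Image

img = np.array(Image.open("print_target.png").convert("L"), dtype=np.float64)  # hypothetical
coeffs = pywt.wavedec2(img, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

for keep in (0.20, 0.10, 0.05, 0.02):                   # stepwise compression levels
    thr = np.quantile(np.abs(arr), 1.0 - keep)
    kept = np.where(np.abs(arr) >= thr, arr, 0.0)
    rec = pywt.waverec2(pywt.array_to_coeffs(kept, slices, output_format="wavedec2"), "db4")
    rec = rec[:img.shape[0], :img.shape[1]]             # waverec2 may pad odd-sized images
    mse = np.mean((img - rec) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    print(f"keep {keep:.0%}: PSNR {psnr:.2f} dB")
```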
