• Title/Abstract/Keyword: model image

Search results: 6,554 items (processing time 0.037 s)

나노 와이어 MOSFET 구조의 광검출기를 가지는 SOI CMOS 이미지 센서의 픽셀 설계 (Design of SOI CMOS image sensors using a nano-wire MOSFET-structure photodetector)

  • 도미영;신영식;이성호;박재현;서상호;신장규;김훈
    • 센서학회지
    • /
    • Vol. 14, No. 6
    • /
    • pp.387-394
    • /
    • 2005
  • In order to design SOI CMOS image sensors, SOI MOSFET model parameters were extracted using the equations for the bulk MOSFET model parameters and were optimized using SPICE level 2. The simulated I-V characteristics of an SOI NMOSFET obtained with the extracted model parameters were compared with the measured I-V characteristics of a fabricated SOI NMOSFET, and the simulation results agreed well with the experimental results. A unit pixel for SOI CMOS image sensors was designed and simulated for PPS, APS, and logarithmic circuits using the extracted model parameters. In these CMOS image sensors, a nano-wire MOSFET photodetector was used. The output voltage levels of the PPS and APS were well defined as the photocurrent was varied. It is confirmed that SOI CMOS image sensors are faster than bulk CMOS image sensors.
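
As a rough illustration of how simulated pixel circuits respond to photocurrent, the Python sketch below models an APS output as a linear discharge of the photodiode and a logarithmic pixel as a log-law response. All device values (reset voltage, capacitance, integration time, I_0) are assumed for demonstration only and are not taken from the paper's SPICE level-2 simulation.

```python
import numpy as np

V_RESET = 3.3           # reset voltage (V), assumed
C_PD = 10e-15           # photodiode capacitance (F), assumed
T_INT = 10e-3           # integration time (s), assumed
N_KT_Q = 1.5 * 0.0259   # n*kT/q for the logarithmic pixel (V), assumed
I_0 = 1e-15             # reference/dark current (A), assumed

def aps_output(i_photo):
    """3T APS: output falls linearly as photocurrent discharges the photodiode."""
    v = V_RESET - i_photo * T_INT / C_PD
    return np.clip(v, 0.0, V_RESET)

def log_pixel_output(i_photo):
    """Logarithmic pixel: output is proportional to the log of the photocurrent."""
    return V_RESET - N_KT_Q * np.log(i_photo / I_0)

for i_ph in np.logspace(-15, -9, 7):  # 1 fA .. 1 nA
    print(f"I_ph={i_ph:.1e} A  APS={aps_output(i_ph):.3f} V  LOG={log_pixel_output(i_ph):.3f} V")
```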

Pest Control System using Deep Learning Image Classification Method

  • Moon, Backsan;Kim, Daewon
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 24, No. 1
    • /
    • pp.9-23
    • /
    • 2019
  • In this paper, we propose a layer structure for a pest-image classifier model using a CNN (Convolutional Neural Network) together with a background-removal image processing algorithm that improves classification accuracy, in order to build a smart monitoring system for pine wilt pest control. We constructed and trained the CNN classifier by collecting image data of pine wilt pest mediators, and ran experiments to verify the classification accuracy of the model and the effect of the proposed classification algorithm. Experimental results showed that the proposed method detected and preprocessed the object region accurately for all test images, yielding a classification accuracy of about 98.91%. This study shows that the proposed CNN layer structure classifies the targeted pest images effectively in various environments. In a field test using the Smart Trap for capturing pine wilt pest mediators, the proposed classification algorithm was effective in the real environment, showing a classification accuracy of 88.25%, an improvement of about 8.12% when the image-cropping preprocessing was applied. Ultimately, we will apply these techniques and verify their functionality in field tests at various sites.
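
For illustration only, the sketch below pairs a simple background-removal crop (Otsu threshold plus largest-contour bounding box) with a small CNN classifier in PyTorch. The layer sizes, input resolution, and file names are assumptions and do not reproduce the paper's exact architecture or preprocessing.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def crop_object(bgr_image, size=128):
    """Rough background-removal crop: keep the largest foreground contour (illustrative only)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        bgr_image = bgr_image[y:y + h, x:x + w]
    return cv2.resize(bgr_image, (size, size))

class PestCNN(nn.Module):
    """Small CNN classifier; layer sizes are assumptions, not the paper's exact architecture."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage sketch (hypothetical file name): preprocess one trap photo, then classify it.
# img = crop_object(cv2.imread("trap_photo.jpg"))
# tensor = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
# logits = PestCNN(num_classes=2)(tensor)
```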

Adaptive Attention Annotation Model: Optimizing the Prediction Path through Dependency Fusion

  • Wang, Fangxin;Liu, Jie;Zhang, Shuwu;Zhang, Guixuan;Zheng, Yang;Li, Xiaoqian;Liang, Wei;Li, Yuejun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 9
    • /
    • pp.4665-4683
    • /
    • 2019
  • Previous methods build image annotation models by leveraging three basic dependencies: relations between image and label (image/label), between images (image/image), and between labels (label/label). Even though plenty of research shows that multiple dependencies can work jointly to improve annotation performance, the different dependencies do not actually "work jointly" in those frameworks, whose performance largely depends on the result predicted by the image/label component. To address this problem, we propose the adaptive attention annotation model (AAAM) to associate these dependencies with the prediction path, which is composed of a series of labels (tags) in the order they are detected. In particular, we optimize the prediction path by detecting the relevant labels from the easy-to-detect to the hard-to-detect, which are found using Binary Cross-Entropy (BCE) and Triplet Margin (TM) losses, respectively. Besides, in order to capture the information of each label, instead of explicitly extracting regional features, we propose a self-attention mechanism to implicitly enhance the relevant regions and restrain the irrelevant ones. To validate the effectiveness of the model, we conduct experiments on three well-known public datasets, COCO 2014, IAPR TC-12 and NUS-WIDE, and achieve better performance than the state-of-the-art methods.
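
A minimal PyTorch sketch of the two ingredients named above, a self-attention block that re-weights spatial features and a combined BCE-plus-triplet-margin objective, is given below. The attention design, the weighting factor, and all shapes are assumptions rather than the AAAM's published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Non-local style self-attention that re-weights spatial features (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

# Combined objective: BCE for the multi-label "easy" predictions plus a triplet margin term
# that pulls an anchor embedding toward a positive and away from a negative.
bce_loss = nn.BCEWithLogitsLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

def annotation_loss(logits, targets, anchor, positive, negative, alpha=0.5):
    """alpha balances the two terms; the value is an assumption, not the paper's setting."""
    return bce_loss(logits, targets) + alpha * triplet_loss(anchor, positive, negative)
```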

이중스케일분해기와 미세정보 보존모델에 기반한 다중 모드 의료영상 융합연구 (Multimodal Medical Image Fusion Based on Two-Scale Decomposer and Detail Preservation Model)

  • 장영매;이효종
    • 한국정보처리학회:학술대회논문집
    • /
    • 2021 Fall Conference of the Korea Information Processing Society
    • /
    • pp.655-658
    • /
    • 2021
  • The purpose of multimodal medical image fusion (MMIF) is to integrate images of different modalities, with their different details, into a single result image with rich information, making it easier for doctors to accurately diagnose and treat patients' diseased tissues. Motivated by this purpose, this paper proposes a novel method based on a two-scale decomposer and a detail preservation model. The first step uses the two-scale decomposer to decompose the source image into energy layers and structure layers, which have the characteristic of detail preservation. Then, a structure tensor operator and max-abs selection are combined to fuse the structure layers. The detail preservation model is proposed for fusing the energy layers, which greatly improves image quality. The fused image is obtained by summing the two fused sub-images produced by the above fusion rules. Experiments demonstrate that the proposed method has superior performance compared with state-of-the-art fusion methods.
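
The sketch below illustrates the general two-scale fusion idea with NumPy/OpenCV: a box filter produces the energy (base) layers, the residuals form the structure (detail) layers, the detail layers are fused by max-abs selection, and the base layers are simply averaged as a stand-in for the paper's structure-tensor rule and detail preservation model.

```python
import cv2
import numpy as np

def two_scale_fuse(img_a, img_b, ksize=31):
    """Illustrative two-scale fusion of two grayscale source images (not the paper's exact rules)."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)

    # Two-scale decomposition: low-frequency energy layer + residual structure layer.
    base_a = cv2.blur(a, (ksize, ksize))
    base_b = cv2.blur(b, (ksize, ksize))
    detail_a = a - base_a
    detail_b = b - base_b

    # Structure layers: per-pixel max-abs selection.
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)

    # Energy layers: simple averaging as a stand-in for the detail preservation model.
    fused_base = 0.5 * (base_a + base_b)

    return np.clip(fused_base + fused_detail, 0, 255).astype(np.uint8)

# Usage with hypothetical file names:
# fused = two_scale_fuse(cv2.imread("mri.png", 0), cv2.imread("pet.png", 0))
```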

Active Appearance Model을 이용한 얼굴 추적 시스템 (Face Tracking System using Active Appearance Model)

  • 조경식;김용국
    • 한국HCI학회:학술대회논문집
    • /
    • 2006 Conference of the Korean HCI Society, Part 1
    • /
    • pp.1044-1049
    • /
    • 2006
  • Face tracking is a key technology that supports many other techniques at the heart of vision-based HCI, such as face recognition, facial expression recognition, and gesture recognition. Existing face tracking techniques either use invariant image features such as color or contours, or use templates or appearance models; because these methods are sensitive to external conditions such as lighting and background, they cannot be used in diverse environments, and it is also difficult to extract the face region accurately. In this paper we propose a face tracking system that uses an AAM (Active Appearance Model), which fits a deformable model to find the shape and appearance most similar to the model. The proposed system uses an Independent AAM instead of the conventional Combined AAM, and improves fitting speed by using Inverse Compositional Image Alignment as the fitting algorithm. The training set for building the AAM consisted of 150 gray-scale images containing faces in four poses. The shape model was built by triangulating 47 manually annotated vertices in each image into 71 triangles forming a single mesh, and the appearance model was built from all pixels inside the shape. System performance was evaluated by measuring the accuracy of the shape coordinates after fitting.
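
As a small illustration of the shape-model side of AAM training, the Python sketch below builds a PCA shape model from aligned landmark coordinates and synthesizes new shapes as s = s_mean + P*b. Procrustes alignment, the appearance model, and the inverse compositional fitting step are omitted, and the random data stand in for real annotations.

```python
import numpy as np

def build_shape_model(landmarks, variance_kept=0.95):
    """PCA shape model: `landmarks` is an (n_images, n_points * 2) array of aligned coordinates."""
    mean_shape = landmarks.mean(axis=0)
    centered = landmarks - mean_shape
    # Eigen-decomposition of the covariance via SVD of the centered data.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2 / (len(landmarks) - 1)
    k = np.searchsorted(np.cumsum(variances) / variances.sum(), variance_kept) + 1
    return mean_shape, components[:k], variances[:k]

def synthesize_shape(mean_shape, components, params):
    """Shape instance: s = s_mean + P @ b (the linear shape model used by AAMs)."""
    return mean_shape + params @ components

# Usage with random stand-in data (47 landmarks per image, as in the paper's training set).
rng = np.random.default_rng(0)
shapes = rng.normal(size=(150, 47 * 2))
mean_s, P, var = build_shape_model(shapes)
new_shape = synthesize_shape(mean_s, P, np.zeros(len(P)))
```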


PROTOTYPE AUTOMATIC SYSTEM FOR CONSTRUCTING 3D INTERIOR AND EXTERIOR IMAGE OF BIOLOGICAL OBJECTS

  • Park, T. H.;H. Hwang;Kim, C. S.
    • 한국농업기계학회:학술대회논문집
    • /
    • 한국농업기계학회 2000년도 THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING. V.II
    • /
    • pp.318-324
    • /
    • 2000
  • Ultrasonic and magnetic resonance imaging systems are used to visualize the interior of biological objects. These nondestructive methods have many advantages but are very expensive, do not give exact color information, and may miss some details. If some biological objects may be destroyed to obtain interior and exterior information, constructing a 3D image from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D model generator was developed. The system was composed of three modules. The first is the object handling and image acquisition module, which feeds and slices objects sequentially, keeps the paraffin cool so that it remains solid, and captures the sectional images consecutively. The second is the system control and interface module, which controls the actuators for feeding, slicing, and image capturing. The last is the image processing and visualization module, which processes the series of acquired sectional images and generates the 3D graphic model. The handling module consists of the gripper, which grasps and feeds the object, and the cutting device, which cuts the object by moving the cutting edge forward and backward. The sliced sectional images were acquired and saved as bitmap files. The 3D model was generated from these 2D sectional image files, after they were segmented from the paraffin background, to obtain volumetric information. Once the 3D model was constructed on the computer, the user could manipulate it with various transformations such as translation, rotation, and scaling, including arbitrary sectional views.
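
A minimal Python sketch of the visualization module's core step, stacking segmented sectional images into a 3D volume, is shown below. The file pattern and the intensity threshold used to separate the object from the paraffin background are assumptions.

```python
import glob
import cv2
import numpy as np

def build_volume(slice_glob="slices/*.bmp", paraffin_threshold=200):
    """Stack sequential sliced sectional images into a 3D volume (illustrative sketch).
    The paper's segmentation of the object from the paraffin background is approximated
    here with a simple intensity threshold."""
    volume = []
    for path in sorted(glob.glob(slice_glob)):
        bgr = cv2.imread(path)
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        mask = gray < paraffin_threshold                # object pixels are darker than bright paraffin
        slice_rgb = np.where(mask[..., None], bgr, 0)   # zero out the paraffin background
        volume.append(slice_rgb)
    return np.stack(volume, axis=0)                     # shape: (n_slices, height, width, 3)

# The stacked array can then be rendered or resliced, e.g. volume[:, :, x] gives a side view.
```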


개선된 IAFC 모델을 이용한 영상 대비 향상 기법 (An Image Contrast Enhancement Technique Using the Improved Integrated Adaptive Fuzzy Clustering Model)

  • 이금분;김용수
    • 한국지능시스템학회논문지
    • /
    • Vol. 11, No. 9
    • /
    • pp.777-781
    • /
    • 2001
  • This paper proposes an image contrast enhancement technique that applies a fuzzy function and an improved IAFC (Integrated Adaptive Fuzzy Clustering) model to process low-contrast images and obtain enhanced images. Since the uncertainty in image information caused by low contrast stems from the ambiguity and fuzziness of gray levels rather than from randomness, fuzzy set theory is applied in developing the enhancement technique. Image enhancement can be divided into fuzzification, contrast intensification, and defuzzification stages; the fuzzification and defuzzification stages require an appropriate crossover point, and the improved IAFC model is applied to select the optimal one. The improved IAFC model, which can cluster data by adjusting a threshold parameter without prior knowledge of the data, is constrained to form only two classes, and the crossover point at which the ambiguity of the gray levels is maximal is found to intensify the contrast. The index of fuzziness is used as a quantitative measure of contrast enhancement, and the results are compared with those of histogram equalization. Experiments on many test images show that the proposed technique, which locates the optimal crossover point for low-contrast images, produces superior results.
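
The sketch below illustrates the fuzzification / contrast-intensification / defuzzification pipeline with the classic INT operator. The crossover point is passed in as a plain argument, whereas in the paper it is selected by the improved IAFC model, which is not reproduced here.

```python
import numpy as np

def fuzzy_contrast_enhance(gray, crossover=128, iterations=2):
    """Illustrative fuzzy contrast enhancement of an 8-bit grayscale image."""
    x = gray.astype(np.float32)
    x_max = max(float(x.max()), crossover + 1.0)

    # Fuzzification: map gray levels to memberships in [0, 1], with the
    # crossover point mapped to membership 0.5.
    mu = np.clip(0.5 * x / crossover, 0.0, 0.5)
    mu = np.where(x > crossover,
                  np.clip(0.5 + 0.5 * (x - crossover) / (x_max - crossover), 0.5, 1.0),
                  mu)

    # Contrast intensification (INT operator), applied repeatedly.
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)

    # Defuzzification: map memberships back to gray levels.
    return (mu * 255).astype(np.uint8)
```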


Adaptive Transform Image Coding by Fuzzy Subimage Classification

  • Kong, Seong-Gon
    • 한국지능시스템학회논문지
    • /
    • Vol. 2, No. 2
    • /
    • pp.42-60
    • /
    • 1992
  • An adaptive fuzzy system can efficiently classify subimages into four categories according to image activity level for image data compression. The system estimates fuzzy rules by clustering input-output data generated from a given adaptive transform image coding process. The system encodes different images without modification and reduces side information when encoding multiple images. In the second part, a fuzzy system estimates optimal bit maps for the four subimage classes in noisy channels assuming a Gauss-Markov image model. The fuzzy systems respectively estimate the sampled subimage classification and the bit-allocation processes without a mathematical model of how outputs depend on inputs and without rules articulated by experts.
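
As an illustration of activity-based subimage classification for adaptive transform coding, the sketch below DCT-transforms each 8x8 block and assigns it to one of four classes by its AC energy. The thresholds are assumed values, whereas the paper learns the classification with an adaptive fuzzy system.

```python
import numpy as np
from scipy.fft import dctn

def classify_subimages(gray, block=8, thresholds=(50.0, 200.0, 800.0)):
    """Classify each block of a grayscale image into one of four activity classes (sketch)."""
    h, w = gray.shape
    classes = np.zeros((h // block, w // block), dtype=np.int32)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coeffs = dctn(gray[i:i + block, j:j + block].astype(np.float32), norm="ortho")
            ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2  # energy excluding the DC term
            classes[i // block, j // block] = np.searchsorted(thresholds, ac_energy)
    return classes  # 0 = lowest activity ... 3 = highest; more bits go to higher-activity classes
```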


선형판별법에 의한 GMS 영상의 객관적 운형분류 (Objective Cloud Type Classification of Meteorological Satellite Data Using Linear Discriminant Analysis)

  • 서애숙;김금란
    • 대한원격탐사학회지
    • /
    • Vol. 6, No. 1
    • /
    • pp.11-24
    • /
    • 1990
  • This is a study of objective classification of meteorological satellite cloud imagery. For objective cloud classification, linear discriminant analysis was applied. For the linear discriminant analysis, 27 cloud characteristic parameters were retrieved from GMS infrared image data, and a linear cloud classification model was developed from the major parameters and cloud-type coefficients. The model was applied to GMS IR images in weather forecasting operations, and the cloud imagery was classified into five types: Sc, Cu, CiT, CiM, and Cb. The classification results compared reasonably well with the actual imagery.
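
A minimal sketch of linear-discriminant cloud-type classification is given below, using scikit-learn's LinearDiscriminantAnalysis on 27-dimensional feature vectors that stand in for the retrieved cloud characteristic parameters. The training data here are random placeholders, not GMS measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CLOUD_TYPES = ["Sc", "Cu", "CiT", "CiM", "Cb"]  # the paper's five cloud types

# Random stand-in data: 27 cloud characteristic parameters per training sample.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 27))
y_train = rng.integers(0, len(CLOUD_TYPES), size=500)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Classify parameters extracted from new image segments.
X_new = rng.normal(size=(3, 27))
predicted = [CLOUD_TYPES[i] for i in lda.predict(X_new)]
print(predicted)
```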

Development of Camera-Based Measurement System for Crane Spreader Position using Foggy-degraded Image Restoration Technique

  • Kim, Young-Bok
    • 한국항해항만학회지
    • /
    • Vol. 35, No. 4
    • /
    • pp.317-321
    • /
    • 2011
  • In this paper, a foggy-degraded image restoration technique based on a physics-based degradation model is proposed for the measurement system. When the degradation model is used for image restoration, its parameters and the distance from the spreader to the camera must be known in advance. In the proposed technique, the parameters are estimated from the variances and averages of intensities in two foggy-degraded landmark images taken at different distances. Foggy-degraded images can then be restored using the estimated parameters and the distance measured by the measurement system. The performance of the proposed restoration technique was verified on the basis of the experimental results.
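
For illustration, the sketch below restores a frame with the standard atmospheric scattering model I = J*t + A*(1 - t), where t = exp(-beta*d). The extinction coefficient and airlight values are assumed stand-ins for the parameters the paper estimates from two landmark images taken at different distances.

```python
import numpy as np

def restore_foggy_image(observed, distance, beta, airlight):
    """Invert the atmospheric scattering model to recover the scene radiance J (sketch)."""
    t = np.exp(-beta * distance)          # transmission along the line of sight
    t = max(t, 1e-3)                      # avoid division by a vanishing transmission
    restored = (observed.astype(np.float32) - airlight * (1.0 - t)) / t
    return np.clip(restored, 0, 255).astype(np.uint8)

# Usage with assumed values: a frame 20 m from the spreader, beta = 0.05 1/m, airlight 230.
# clear = restore_foggy_image(foggy_frame, distance=20.0, beta=0.05, airlight=230.0)
```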