• Title/Summary/Keyword: sample pixel


Development of Crack Detection System for Highway Tunnels using Imaging Device and Deep Learning (영상장비와 딥러닝을 이용한 고속도로 터널 균열 탐지 시스템 개발)

  • Kim, Byung-Hyun;Cho, Soo-Jin;Chae, Hong-Je;Kim, Hong-Ki;Kang, Jong-Ha
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.25 no.4 / pp.65-74 / 2021
  • In order to efficiently inspect the rapidly increasing number of old tunnels in many developed countries, many inspection methodologies using imaging equipment and image processing have been proposed. However, most of the existing methodologies evaluated their performance on clean concrete surfaces over a limited area where no other objects exist. Therefore, this paper proposes a 6-step framework for developing a deep learning model for tunnel crack detection. The proposed method is mainly based on negative-sample (non-crack object) training and Cascade Mask R-CNN. The framework consists of six steps: searching for cracks in images captured from real tunnels, labeling cracks at the pixel level, training a deep learning model, collecting non-crack objects, retraining the deep learning model with the collected non-crack objects, and constructing the final training dataset. To implement the proposed framework, Cascade Mask R-CNN, an instance segmentation model, was trained with 1561 general crack images and 206 non-crack images. To examine the applicability of the trained model to real-world tunnel crack detection, field testing was conducted on tunnel spans of about 200 m in length where electric wires and lights are prevalent. In the experimental results, the trained model showed 99% precision and 92% recall, which demonstrates the excellent field applicability of the proposed framework.
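
The abstract does not name a training framework; as an illustration only, the sketch below shows how a Cascade Mask R-CNN could be fine-tuned with Detectron2 on a crack dataset that also contains the non-crack (negative) images registered with empty annotations. The dataset names, file paths and solver settings are hypothetical.

```python
# Sketch only: the paper does not specify a framework; Detectron2's Cascade Mask R-CNN
# config is assumed here, and all paths/hyperparameters are hypothetical.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# COCO-style dataset: 1561 pixel-labeled crack images plus 206 non-crack images
# registered with empty annotations so they act as negative samples.
register_coco_instances("tunnel_crack_train", {}, "data/crack_train.json", "data/images")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("tunnel_crack_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS = False   # keep the negative (non-crack) images
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1               # single "crack" class
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 20000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```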

Transferring Calibrations Between on Farm Whole Grain NIR Analysers

  • Clancy, Phillip J.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1210-1210 / 2001
  • On-farm analysis of protein, moisture and oil in cereals and oil seeds is quickly being adopted by Australian farmers. The benefits of being able to measure protein and oil in grains and oil seeds are several: optimizing crop payments, monitoring the effects of fertilization, blending on farm to meet market requirements, and off-farm marketing (selling the crop with load-by-load analysis). However, farmers are not NIR spectroscopists, and the process of calibrating instruments has become the duty of the supplier. With the potential number of on-farm analysers being in the thousands, the task of calibrating each instrument individually would be impossible, let alone the problems encountered with updating calibrations from season to season. As such, NIR Technology Australia has developed a mechanism for standardizing their range of Cropscan 2000G NIR analysers so that a single calibration can be transferred from the master instrument to every slave instrument. Whole grain analysis has been developed over the last 10 years using near-infrared transmission through a sample of grain with a pathlength varying from 5-30 mm. A continuous spectrum from 800-1100 nm is the optimal wavelength coverage for these applications, and a grating-based spectrophotometer has proven to provide the best means of producing this spectrum. The most important aspect of standardizing NIR instruments is to duplicate the spectral information. The task is to align the spectra from the slave instruments to the master instrument in terms of wavelength positioning and then to adjust the spectral response at each wavelength so that the slave instruments mimic the master instrument. The Cropscan 2000G and 2000B Whole Grain Analysers use flat field spectrographs to produce a spectrum from 720-1100 nm and a silicon photodiode array detector to collect the spectrum at approximately 10 nm intervals. The concave holographic gratings used in the flat field spectrographs are produced by photolithography; as such, each grating is an exact replica of the original. To align wavelengths in these instruments, an NIR wheat sample scanned on the master and the slave instruments provides three check points in the spectrum for a more exact alignment. Once the wavelengths are matched, many samples of wheat, approximately 10, exhibiting absorbances from 2 to 4.5 absorbance units, are scanned on the master and then on each slave. Using a simple linear regression technique, a slope and bias adjustment is made for each pixel of the detector. This process corrects the spectral response at each wavelength so that the slave instruments produce the same spectra as the master instrument. It is important to use as broad a range of absorbances in the samples as possible so that a good slope and bias estimate can be calculated. These slope and bias (S&B) factors are then downloaded into the slave instruments. Calibrations developed on the master instrument can then be downloaded onto the slave instruments and perform similarly to the master instrument. The data shown in this paper illustrate the process of calculating these S&B factors and the transfer of calibrations for wheat, barley and sorghum between several instruments.

  • PDF
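
A minimal sketch of the slope-and-bias standardisation described above (not the vendor's actual software): compute per-pixel slope and bias factors from the roughly ten wheat samples scanned on both the master and a slave instrument, then apply them to new slave spectra. Array shapes and function names are assumptions.

```python
import numpy as np

def slope_bias_factors(master, slave):
    """Per-pixel slope and bias so that slope * slave + bias ~= master.

    master, slave: (n_samples, n_pixels) spectra of the same ~10 wheat samples
    (absorbances roughly 2-4.5 AU) scanned on the master and slave instruments."""
    n_samples, n_pixels = master.shape
    slopes = np.empty(n_pixels)
    biases = np.empty(n_pixels)
    for p in range(n_pixels):
        # simple linear regression of the master response on the slave response at this pixel
        slopes[p], biases[p] = np.polyfit(slave[:, p], master[:, p], 1)
    return slopes, biases

def standardize(slave_spectrum, slopes, biases):
    """Apply the downloaded S&B factors so the slave mimics the master spectrum."""
    return slopes * slave_spectrum + biases
```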

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first 3D texture stores light color and the second stores light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering. Each particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting the corresponding voxel, while the second 3D texture accumulates the sum of the direction vectors from that voxel to the particle lights. The second step operates in the general rendering pipeline. Based on the world position of the polygon being rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 to the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then carried out based on the sampled light color and light direction vector. The 3D texture corresponds 1:1 to the actual game world and assumes a minimum unit of 1 m, but in areas smaller than 1 m, artifacts such as staircase patterns occur due to the resolution restriction; interpolation and supersampling are performed during texture sampling to mitigate these problems. Measurements of the time taken to render a frame showed that 146 ms was spent on the forward lighting pipeline and 46 ms on the deferred lighting pipeline when the number of particles was 262,144, and 214 ms on the forward lighting pipeline and 104 ms on the deferred lighting pipeline when the number of particle lights was 1,024,766.
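
A CPU-side sketch (NumPy, not an actual compute shader) of the two-texture accumulation and the per-pixel lookup described above. The 1 m-per-voxel, 1:1 world mapping follows the abstract; the texture resolution, function names and the Lambert shading term are assumptions for illustration.

```python
import numpy as np

def build_light_textures(light_pos, light_color, size=256):
    """Sketch of the compute-shader pass: scatter particle-light color and
    direction information into two 3D textures (1 voxel == 1 m, matching the
    paper's 1:1 world mapping; the 256^3 resolution is an assumption)."""
    color_tex = np.zeros((size, size, size, 3), dtype=np.float32)
    dir_tex = np.zeros((size, size, size, 3), dtype=np.float32)
    voxel = np.clip(light_pos.astype(np.int64), 0, size - 1)   # world position -> texel coords
    to_light = light_pos - (voxel + 0.5)                       # voxel center -> light direction
    idx = (voxel[:, 0], voxel[:, 1], voxel[:, 2])
    np.add.at(color_tex, idx, light_color)                     # sum of light colors per voxel
    np.add.at(dir_tex, idx, to_light)                          # sum of direction vectors per voxel
    return color_tex, dir_tex

def shade(pixel_world_pos, surface_normal, color_tex, dir_tex):
    """Sketch of the second stage: sample both textures at the pixel's world
    position and apply a simple Lambert term (the paper's exact lighting model
    is not specified)."""
    v = tuple(np.clip(pixel_world_pos.astype(np.int64), 0, color_tex.shape[0] - 1))
    light_color = color_tex[v]
    light_dir = dir_tex[v]
    norm = np.linalg.norm(light_dir)
    if norm > 0.0:
        light_dir = light_dir / norm
    return light_color * max(float(np.dot(surface_normal, light_dir)), 0.0)
```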

The Evaluation of Cerebral Executive Function Using Functional MRI (기능적 자기공명영상기법을 이용한 대뇌의 집행기능 평가)

  • Eun, Sung Jong;Gook, Jin Seon;Kim, Jeong Jae
    • Journal of the Korean Society of Radiology / v.7 no.5 / pp.305-311 / 2013
  • This study used functional magnetic resonance imaging (fMRI) to delineate brain activation during the performance of executive function. The participants were 10 healthy adults (4 men, 6 women; mean age 24.5 years) with no metal implants, no claustrophobia, and no history of surgery. For the fMRI experiment, a word-color test was presented as the task with a stimulus presentation time of 30 seconds. The images were realigned and spatially normalized using the SPM99 program, and the signal-intensity difference between rest and activation periods was examined pixel by pixel over the time series using an independent-sample t-test (p<.05). The resulting brain activation map, thresholded at the 95% significance level, was overlaid on a standard anatomical image. The fMRI results showed that, in relation to executive function, the prefrontal lobe, anterior cingulate gyrus, parietal lobe, orbitofrontal gyrus, and temporal lobe were activated. Because executive function plays a major role in promoting recovery in occupational therapy, understanding its damage mechanism is important. Identifying the brain areas that carry out executive function will be very useful for developing effective cognitive therapy methods based on brain plasticity.
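
The analysis in the paper was done in SPM99; purely as a generic illustration of a voxel-wise independent-sample t-test between rest and activation scans, a NumPy/SciPy sketch might look like the following (array shapes and the block labelling are assumptions).

```python
import numpy as np
from scipy import stats

def activation_map(volumes, is_active, alpha=0.05):
    """Voxel-wise independent-sample t-test between activation and rest scans.

    volumes:   (n_scans, x, y, z) realigned/normalized images (hypothetical array)
    is_active: boolean (n_scans,) marking scans acquired during the word-color task."""
    active = volumes[is_active]
    rest = volumes[~is_active]
    t, p = stats.ttest_ind(active, rest, axis=0)
    # voxels whose signal is significantly higher during the task (p < .05)
    return (p < alpha) & (t > 0)
```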

Automatic Titration Using PC Camera in Acidity Analyses of Vinegar, Milk and Takju (PC 카메라를 이용한 식초, 우유 및 탁주의 산도 적정 자동화)

  • Lee, Hyeong-Choon
    • Journal of the Korean Society of Food Science and Nutrition / v.36 no.12 / pp.1583-1588 / 2007
  • PC camera-based automatic titration was carried out for the acidity analyses of vinegar, milk and Takju. The average hue value (Havg) of 144 pixels in the image of the sample solution being titrated was computed and monitored at regular time intervals during titration in order to detect the titration end point. An Havg increase of 5 degrees from the initial Havg was regarded as reaching the end point in the cases of vinegar and milk; the Havg increase set to detect the end point was 70 degrees in the case of Takju. In the case of vinegar, the volume of added titrant (0.1 N NaOH) was $21.409{\pm}0.066$ mL in manual titration and $21.403{\pm}0.055$ mL in automatic titration (p=0.841). In the case of milk, it was $1.390{\pm}0.025$ mL in manual titration and $1.388{\pm}0.027$ mL in automatic titration (p=0.907). In the case of Takju, it was $4.738{\pm}0.028$ mL in manual titration and $4.752{\pm}0.037$ mL in automatic titration (p=0.518). The high p values indicate good agreement between the manual and automatic titration data for all three food samples. The automatic method proposed in this article is considered to be applicable not only to acidity titrations but also to most titrations in which the end point can be detected by a color change.
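
A rough sketch of the end-point logic described above, assuming OpenCV for the PC-camera frames. The hue-increase thresholds follow the abstract; the 12 x 12 region of interest (144 pixels), the function names and the BGR-to-HSV conversion are assumptions.

```python
import cv2
import numpy as np

def average_hue(frame, roi):
    """Mean hue (in degrees) over a small region of the titrated solution.
    roi = (x, y, w, h); a 12 x 12 window gives the 144 pixels used in the paper."""
    x, y, w, h = roi
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    return float(hsv[:, :, 0].mean()) * 2.0   # OpenCV stores hue as 0-179 (degrees / 2)

def reached_end_point(initial_hue, current_hue, threshold_deg=5.0):
    """End point when the mean hue has risen by the threshold above the initial
    value: 5 degrees for vinegar and milk, 70 degrees for Takju."""
    return (current_hue - initial_hue) >= threshold_deg
```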

A COMPARISON OF PERIAPICAL RADIOGRAPHS AND THEIR DIGITAL IMAGES FOR THE DETECTION OF SIMULATED INTERPROXIMAL CARIOUS LESIONS (모의 인접면 치아우식병소의 진단을 위한 구내 표준방사선사진과 그 디지털 영상의 비교)

  • Kim Hyun;Chung Hyun-Dae
    • Journal of Korean Academy of Oral and Maxillofacial Radiology / v.24 no.2 / pp.279-290 / 1994
  • The purpose of this study was to compare the diagnostic accuracy of periapical radiographs and their digitized images for the detection of simulated interproximal carious lesions. A total of 240 interproximal surfaces were used in this study. The case sample was composed of 80 anterior teeth, 80 bicuspids and 80 molars, prepared so that the surfaces ranged from caries-free to those containing simulated carious lesions of varying depths (0.5 mm, 0.8 mm, and 1.2 mm). The periapical radiographs were taken with the paralleling technique using Kodak Ektaspeed (E group) film. All radiographs were evaluated by five dentists to recognize the true status of the simulated carious lesions; they were asked to give a score of 0, 1, 2, or 3. Digitized images were obtained using a commercial video processor (FOTOVIX II-XS), and the computer system was a 486 DX PC with PC Vision and a frame grabber. The 17-inch display monitor had a resolution of 1280×1024 pixels (0.26 mm dot pitch), while one frame of the intraoral radiograph had a resolution of 700×480 pixels with 256 grey levels per pixel. All radiographs and digital images were viewed under uniform subdued lighting in the same reading room, and a second interpretation was performed under the same conditions after a week. The detection of lesions on the monitor was compared with the findings of simulated interproximal carious lesions on the film images. The results were as follows. 1. When the scoring criterion was dichotomous (lesion present or not present): 1) the overall sensitivity, specificity and diagnostic accuracy of periapical radiographs and their digital images showed no statistically significant difference; 2) the sensitivity and specificity according to the region of teeth and the grade of lesions showed no statistically significant difference between periapical radiographs and their digital images. 2. When estimating the grade of lesions (score 0, 1, 2, 3): 1) the overall diagnostic accuracy was 53.3% on the intraoral films and 52.9% on the digital images, with no significant difference; 2) the diagnostic accuracy according to the region of teeth showed no statistically significant difference between periapical radiographs and their digital images. 3. Regarding the degree of agreement and reliability: 1) using the gamma value to express the degree of agreement, periapical films and digital images were similar; 2) the reliability between the two interpretations of periapical films and of digital images showed no statistically significant difference. In all cases the p value was greater than 0.05, showing that both techniques can be used to detect incipient and moderate interproximal carious lesions with similar accuracy.

  • PDF
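
For the dichotomous scoring used above, sensitivity, specificity and diagnostic accuracy can be computed as in the following sketch; the observer scores and ground-truth labels are hypothetical inputs, and the paper's statistical testing is not reproduced here.

```python
import numpy as np

def diagnostic_indices(scores, truth):
    """Dichotomized evaluation: any score > 0 is read as 'lesion present'.

    scores: observer scores (0-3) per surface; truth: True where a simulated
    lesion was actually prepared (hypothetical input arrays)."""
    pred = np.asarray(scores) > 0
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / truth.size
    return sensitivity, specificity, accuracy
```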

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering / v.26 no.1 / pp.21-28 / 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance for diverse tasks, the lack of robustness of task results, high system cost, failure to earn the operator's trust, and so on. This paper proposes a scheme that can overcome the current limitations in the task abilities of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it was divided into two categories: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of object identification and extraction of the 3D coordinates of an object was selected. 3D coordinate information was obtained from camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a camera a certain distance in the horizontal direction normal to the focal axis and acquiring an image at each of the two locations. The transformation matrix for camera calibration was obtained via a least-squares-error approach using six specified known pairs of data points in the 2D image and 3D world space. The 3D world coordinate was obtained from the two sets of image pixel coordinates of both camera images with the calibrated transformation matrix. As the interface system between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. Object indication was done by the operator's finger touch on the captured image using the touch-pad screen. A local image processing area of a certain size was specified after the touch was made, and image processing was performed on the specified local area to extract the desired features of the object. An MS Windows based interface software was developed using Visual C++ 6.0, with four modules: a remote image acquisition module, a task command module, a local image processing module and a 3D coordinate extraction module. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.

  • PDF
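
The paper's exact least-squares formulation is not given; the sketch below illustrates a standard direct linear transform (DLT) fit from six or more known 2D-3D point pairs and the corresponding two-view triangulation used to recover a 3D world coordinate. Function names and the SVD-based solution are assumptions.

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Estimate the 3x4 transformation (projection) matrix from six or more
    known (X, Y, Z) <-> (u, v) pairs by a least-squares (SVD) solution."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def triangulate(P1, P2, uv1, uv2):
    """Recover the 3D world coordinate of the indicated point from its pixel
    coordinates in the two stereo images and the calibrated matrices."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.vstack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```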

$1{\times}8$ Array of GaAs/AlGaAs quantum well infrared photodetector with 7.8 $\mu\textrm{m}$ peak response ($1{\times}8$ 배열, 7.8 $\mu\textrm{m}$ 최대반응 GaAs/AlGaAs 양자우물 적외선 검출기)

  • 박은영;최정우;노삼규;최우석;박승한;조태희;홍성철;오병성;이승주
    • Korean Journal of Optics and Photonics / v.9 no.6 / pp.428-432 / 1998
  • We fabricated a 1$\times$8 array of GaAs/AlGaAs quantum well infrared photodetectors for long-wavelength infrared detection based on the bound-to-continuum intersubband transition, and characterized its electrical and optical properties. The device was grown on SI-GaAs(100) by molecular beam epitaxy and consisted of 25 periods of 40 ${\AA}$ GaAs wells and 500 ${\AA}$ $Al_{0.28}Ga_{0.72}As$ barriers. To reduce the possibility of interface states, only the center 20 ${\AA}$ of each well was doped with Si ($N_D=2{\times}10^{18} cm^{-3}$). We etched the sample to make square mesas of 200$\times$200 $\mu\textrm{m}^2$ and made an ohmic contact on each pixel with Au/Ge. The current-voltage characteristics and photoresponse spectrum of each detector reveal that the array is highly uniform and stable. The spectral responsivity and the detectivity $D^*$ were measured to be 180,260 V/W and $4.9{\times}10^9cm\sqrt{Hz}/W$, respectively, at the peak wavelength of $\lambda=7.8$ $\mu\textrm{m}$ and at T=10 K.

  • PDF

Identification of Rice Species by Three Side (Top, Side and Front) Images of Brown Rice (현미 세 면(윗면, 측면, 앞면)의 화상을 이용한 품종 판별)

  • Kim, Sang-Sook;Lee, Sang-Hyo;Rhyu, Mee-Ra;Kim, Young-Jin
    • Korean Journal of Food Science and Technology / v.30 no.3 / pp.473-479 / 1998
  • Identification of rice species was attempted using three-side (top, side and front) images of brown rice. The nine parameters of each image were area, aspect ratio, maximum diameter, minimum diameter, perimeter, roundness, and the red (R), green (G) and blue (B) pixel values of the image. Forty rice samples consisting of 19 species were used for the study, and a total of 27 image characteristics per kernel were measured. For calibration and confirmation, 105 and 20 brown rice kernels per sample were used, respectively. For the best identification of rice species, 24 image characteristics were selected for discriminant analysis. The average percentages of correct identification of rice species were 84.75% and 84.93% for the calibration and confirmation data sets, respectively. The highest and lowest percentages of correct identification in the calibration data were 99.05% for Nongan and 50.63% for Hwaseung, respectively. The confirmation data showed that the correct identification of Nongan or Paalgong was 100%, while that of Hwaseung was 47.62%. The results of the study showed that three-side (top, side and front) images of brown rice were not suitable for identification of rice species, suggesting that additional techniques are required for better discrimination of rice species.

  • PDF
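
The abstract reports discriminant analysis on 24 selected image features but does not name the software used; as an assumed illustration only, a minimal sketch with scikit-learn's LinearDiscriminantAnalysis is shown below. The feature list in the comment follows the abstract; the arrays are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X_train: (n_kernels, 24) matrix of the selected image features measured on the
#          top, side and front views (area, aspect ratio, diameters, perimeter,
#          roundness, mean R/G/B); y_train: species label per kernel.
def identify_species(X_train, y_train, X_test):
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    return lda.predict(X_test)

# usage sketch: percentage of correct identification on a confirmation set
# acc = np.mean(identify_species(X_cal, y_cal, X_conf) == y_conf) * 100
```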

An Efficient Composite Image Separation by Using Independent Component Analysis Based on Neural Networks (신경망 기반 독립성분분석을 이용한 효율적인 복합영상분리)

  • Cho, Yong-Hyun;Park, Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.3 / pp.210-218 / 2002
  • This paper proposes an efficient method for separating composite images by using independent component analysis (ICA) based on neural networks with an approximate learning algorithm. The proposed learning algorithm is a fixed-point (FP) algorithm based on the secant method, which can be computed approximately using only function values when estimating the root of the objective function for optimizing entropy. The secant method is an alternative to the Newton method, which requires differentiating the function to estimate the root; it preserves the superior properties of the FP algorithm for ICA while simplifying the computation involved in the differentiation process. The proposed algorithm was applied to composite signals and images generated by random mixing matrices from 4 signals of 500 samples and 10 images of $512{\times}512$ pixels, respectively. The simulation results show that the proposed algorithm achieves better learning speed and separation performance than the conventional algorithm-based method. It also resolves the dependence of training performance on the initial point setting and the unrealistic learning time for separating large images that affect the conventional algorithm.
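
The paper's specific contribution, a secant-method approximation of the fixed-point update, is not available off the shelf; purely as a generic illustration of ICA-based composite-image separation, the following sketch uses scikit-learn's standard FastICA on images mixed by a random matrix. Function names and shapes are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_images(mixed):
    """mixed: (n_mixtures, H, W) composite images produced by a random mixing matrix.
    Returns the estimated source images (up to permutation and scaling)."""
    n, h, w = mixed.shape
    X = mixed.reshape(n, -1).T              # each pixel is an observation, each mixture a feature
    ica = FastICA(n_components=n, random_state=0)
    S = ica.fit_transform(X)                # estimated independent components
    return S.T.reshape(n, h, w)

# usage sketch: mix two 512x512 source images with a random 2x2 matrix, then unmix
# sources = np.stack([img1, img2])             # hypothetical (2, 512, 512) inputs
# A = np.random.rand(2, 2)
# mixed = np.tensordot(A, sources, axes=1)     # composite images
# estimated = separate_images(mixed)
```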