• Title/Summary/Keyword: image processing

Search Results: 9,972 (processing time 0.037 seconds)

ULTRASTRUCTURAL ANALYSIS OF TOOTH PULP AFFERENT TERMINALS IN THE MEDULLARY DORSAL HORN OF THE RAT (치수유래 구심성 신경섬유의 삼차신경 감각핵군에서의 연접특성)

  • Bae, Yong-Chul;Lee, Eun-Hee;Choy, Min-Ki;Hong, Su-Hyung;Kim, Hyun-Jung;Na, Soon-Hyeun;Kim, Young-Jin
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.28 no.2
    • /
    • pp.219-227
    • /
    • 2001
  • Little is known about the processing mechanism of oral pain sensation at the first synapse of the trigeminal sensory nuclei. Serial ultrathin sections of tooth pulp afferent terminals, identified by the transganglionic transport of 1% wheat germ agglutinin conjugated to horseradish peroxidase, were investigated with an electron microscope. Quantitative ultrastructural analysis was performed on a digitizing tablet connected to a Macintosh personal computer (software: NIH Image 1.60, NIH, Bethesda, MD). Labeled boutons could be classified into two types by the shapes of the vesicles they contained: S boutons, which contained mainly spherical vesicles (diameter 45-55 nm) and few large dense-cored vesicles (diameter 80-120 nm), and LDCV boutons, which contained spherical vesicles as well as a large number of large dense-cored vesicles. Most parameters of the ultrastructural characteristics and synaptic organization of the labeled boutons were similar between S and LDCV boutons, except for the shapes of the vesicles they contained. The majority of the labeled boutons showed a simple synaptic arrangement. The labeled boutons were frequently presynaptic to dendritic spines and, to a lesser extent, to dendritic shafts. They rarely synapsed with somata and adjacent proximal dendrites. A small proportion of labeled boutons made synaptic contacts with presynaptic endings containing pleomorphic vesicles and formed synaptic triads. Morphometric parameters of the labeled boutons, including volume and surface area, total apposed area, mitochondrial volume, active zone area, and vesicle number and density, showed wide variation and were not significantly different between S and LDCV boutons. The present study revealed characteristic features of the ultrastructure and synaptic connections of pulpal afferents that may be involved in the transmission of oral pain sensation.


Quantitative Analysis of Digital Radiography Pixel Values to Absorbed Energy of the Detector Based on an X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat-panel digital radiography (DR) systems have recently become useful and important in the field of diagnostic radiology. In DR systems with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. In order to produce good-quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and DR output has been investigated in many studies, but only under the condition of a fixed tube voltage. In this study, we investigated the relationship between DR output and X-rays in terms of the energy absorbed in the detector, rather than the air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, an important input variable of SPEC-l8. The energy absorbed in the detector was calculated using an algorithm for the absorbed energy in a material, and the pixel values of real images were obtained under various conditions. The characteristic curve was obtained from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum. The pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. The relationship between DR output and the energy absorbed in the detector was found to be almost linear. In the phantom experiments, the estimated pixel values agreed with the characteristic curve, although scattered photons introduced some errors; the effect of scattered X-rays must be studied further because it was not included in the calculation algorithm. The results of this study can provide useful information for the pre-processing of digital radiographs.
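The nearly linear characteristic curve reported above can be sketched as a simple least-squares fit. This is a minimal illustration with made-up numbers; the data values and the helper name `predict_pixel_value` are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical calibration data: energy absorbed in the detector
# (arbitrary units) and the mean DR pixel value measured at each setting.
absorbed_energy = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
pixel_value = np.array([210.0, 405.0, 820.0, 1610.0, 3230.0])

# The study reports an almost linear characteristic curve, so a
# first-degree polynomial fit suffices: pixel = gain * energy + offset.
gain, offset = np.polyfit(absorbed_energy, pixel_value, deg=1)

def predict_pixel_value(energy):
    """Estimate the expected pixel value for a given absorbed energy."""
    return gain * energy + offset

residuals = pixel_value - predict_pixel_value(absorbed_energy)
print(f"gain={gain:.1f}, offset={offset:.1f}, "
      f"max residual={np.abs(residuals).max():.1f}")
```

Once fitted, the curve can be inverted to estimate absorbed energy from phantom pixel values, which is essentially how the abstract's verification step proceeds.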


Biological stability of zirconia/alumina composite ceramic implant abutments (지르코니아/알루미나 복합 지대주의 생물학적 안정성에 관한 연구)

  • Bae, Kyu-Hyun;Han, Jung-Suk.;Kim, Tae-Il;Seol, Yang-Jo;Lee, Yong-Moo;Ku, Young;Cho, Ki-Young;Chung, Chong-Pyoung;Han, Soo-Boo;Rhyu, In-Chul
    • Journal of Periodontal and Implant Science
    • /
    • v.36 no.2
    • /
    • pp.555-565
    • /
    • 2006
  • The purpose of the present study was to evaluate the biological stability of the zirconia/alumina composite abutment by histologic and radiographic examination in clinical cases. Seventeen partially edentulous patients (5 men and 12 women, mean age 47) were treated with 37 implants. The implants were placed following the standard two-stage protocol. After a healing period of 3 to 6 months, zirconia/alumina composite abutments were connected. All radiographs were taken using the paralleling technique with individually fabricated bite blocks, following insertion of the prosthesis and at the 3-, 6-, and 12-month re-examinations. After processing the obtained images, the osseous level was calculated from the digital image on the mesial and distal aspects of each implant. ANOVA and t-tests were used to test for differences between baseline and the 3-, 6-, and 12-month re-examinations, and between maxilla and mandible; differences at P < 0.05 were considered statistically significant. For histologic examination, a sample was obtained from the palatal gingiva of an implant that had functioned for 12 months, and sections were examined under a light microscope at various magnifications. Clinically, no abutment fracture, crack, or peri-implantitis was observed during the study period. The mean bone level reduction (± standard deviation) was 0.34 mm (± 0.26) at 3 months, 0.42 mm (± 0.30) at 6 months, and 0.62 mm (± 0.28) at 12 months. No statistically significant difference was found between baseline and the 3-, 6-, and 12-month re-examinations (P > 0.05). The mean bone level reduction in the maxilla was 0.33 mm (± 0.25) at 3 months, 0.36 mm (± 0.33) at 6 months, and 0.56 mm (± 0.26) at 12 months; in the mandible it was 0.35 mm (± 0.27), 0.49 mm (± 0.27), and 0.68 mm (± 0.30), respectively. There was no statistically significant difference in bone level reduction between implants placed in the maxilla and mandible. Histologically, the height of the junctional epithelium was about 2.09 mm and its width about 0.51 mm. Scattered fibroblasts, inflammatory cells, and a dense collagen network with few vascular structures characterized the connective tissue portion. Inflammatory cell infiltration was observed just beneath the apical end of the junctional epithelium and in the area in direct contact with the zirconia/alumina abutment. These results suggest that the zirconia/alumina composite abutment can be used under various intraoral conditions, in posterior as well as anterior segments, without adverse effects.

Comparative accuracy of a new implant impression technique using abutments as impression copings with an angulated implant model (경사지게 식립된 임플랜트 모형에서 지대주를 인상용 코핑으로 이용한 새로운 인상법의 정확성 비교 연구)

  • Lee, Hyeok-Jae;Kim, Chang-Whe;Lim, Young-Jun;Kim, Myung-Joo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.2
    • /
    • pp.201-208
    • /
    • 2008
  • Statement of problem: A new implant impression technique that uses abutments as impression copings and resin cement as a splinting material is described. Its accuracy was compared with the conventional closed-tray and resin-splinted open-tray techniques on a 15° angulated 3-implant model. Material and methods: A dental stone master model with 3 linearly positioned implant analogues, and a reference framework passively fitted to it, were fabricated. The center analogue was perpendicular to the plane of the model, and the outer analogues were angulated 15° forward or backward. Ten closed-tray impressions, 10 resin-splinted open-tray impressions, 10 abutment-resin framework cementation impressions, and 10 abutment-metal framework cementation impressions were made with addition silicone material and poured with dental stone. A light microscope with image processing was used to record the vertical gap between the reference framework and the analogues of the duplicate casts made with each of the 4 impression techniques. Statistical analysis used one-way ANOVA with post-hoc Tukey tests at the .05 level of significance. Results: A significant difference in the vertical gap was found between the closed-tray technique (74.3 ± 33.4 µm) and the resin-splinted open-tray technique and the two other new techniques (P < .05). The abutment-metal framework cementation technique (42.5 ± 11.9 µm) differed significantly from the resin-splinted open-tray technique (P < .05). The abutment-resin framework cementation technique (51.0 ± 14.1 µm) did not differ significantly from the resin-splinted open-tray technique (50.3 ± 16.9 µm) (P > .05). Conclusion: Within the limitations of this study, the accuracy of implant-level impressions with the resin-splinted open-tray technique was superior to that of the closed-tray technique. The new technique using abutment and metal framework cementation was more accurate than the resin-splinted open-tray technique.
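The one-way ANOVA comparison across the four impression techniques can be sketched as follows. The group means and standard deviations loosely follow the abstract, but the simulated samples and the helper `one_way_anova_f` are hypothetical illustrations, not the study's data; a post-hoc Tukey test would follow in a full analysis:

```python
import numpy as np

# Hypothetical vertical-gap samples (micrometers), 10 casts per technique;
# means/SDs loosely follow the abstract, the draws themselves are made up.
rng = np.random.default_rng(1)
groups = [
    rng.normal(74.3, 33.4, 10),   # closed tray
    rng.normal(50.3, 16.9, 10),   # resin-splinted open tray
    rng.normal(51.0, 14.1, 10),   # abutment-resin framework cementation
    rng.normal(42.5, 11.9, 10),   # abutment-metal framework cementation
]

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

df_b = len(groups) - 1
df_w = sum(len(g) for g in groups) - len(groups)
print(f"F({df_b}, {df_w}) = {one_way_anova_f(groups):.2f}")
```

The F statistic would then be compared against the F(3, 36) critical value at the .05 level before running pairwise Tukey comparisons.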

A Comparative Study of the Industrial Structure Characteristics of TL and LTL Carriers (구역화물운송업과 노선화물운송업의 산업구조 특성 비교)

  • Min, Seung-Ki
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.1
    • /
    • pp.101-114
    • /
    • 2001
  • Transportation enterprises should maintain constant, high-quality operations. Thus, in the short run, they do not adjust supply to match demand, and consequently do not curtail operations at will despite operating deficits. Among freight transportation types, this characteristic applies more to less-than-truckload (LTL) carriers than to truckload (TL) carriers, because TL supply responds more flexibly to freight demand than LTL supply does. Consistent with this, the shortage of roads and freight terminals is larger for LTL than for TL carriers. In particular, the shortage of freight terminals is larger than that of roads: the road shortage peaked in 1990 and improved afterward, but the freight terminal shortage has recently become serious. Freight terminals therefore need expansion more than roads do, and present better investment conditions. For LTL carriers, freight terminal expansion brings road expansion; for TL carriers, by contrast, terminal expansion substitutes terminals for roads. In terms of transportation revenue, the contribution of freight terminals is larger for LTL than for TL carriers. However, when the quasi-fixed factors (roads and freight terminals) are adjusted to their optimal long-run levels, diseconomies of scale grow for TL carriers while economies of scale grow for LTL carriers. Consequently, TL carriers need measures to support the management of small enterprises and owner-drivers, while LTL carriers should exploit economies of scale by solving problems such as unprofitable routes, excessive rental of freight-handling offices, insufficient freight terminals, driver shortages, and inadequate freight insurance.


Can We Hear the Shape of a Noise Source? (소음원의 모양을 들어서 상상할 수 있을까?)

  • Kim, Yang-Hann
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.14 no.7
    • /
    • pp.586-603
    • /
    • 2004
  • One of the subtle problems that makes noise control difficult for engineers is "the invisibility of noise or sound." A visual image of noise often helps to determine an appropriate means of noise control. There have been many attempts to fulfill this rather challenging objective. Theoretical and numerical means of visualizing the sound field have been attempted, and a great deal of progress has been accomplished, for example in the visualization of turbulent noise. However, most numerical methods are not quite ready to be applied practically to noise control problems. In the meantime, rapid progress has been made instrumentally, using multiple microphones and fast signal processing systems; these systems are not perfect, but they are useful. State-of-the-art systems have recently become available but still have many problematic issues, for example how the visualized noise field should be interpreted. A constructed noise or sound picture always contains bias and random errors, and consequently it is often difficult to determine the origin of the noise and the spatial shape of the noise source, as highlighted in the title. The first part of this paper gives a brief history of sound visualization, from Leonardo da Vinci's famous drawing of a vortex street (Fig. 1) to modern acoustic holography and what has been accomplished with line or surface arrays. The second part introduces the difficulties and recent studies, including de-Dopplerization and de-reverberation methods. The former is essential for visualizing moving noise sources, such as cars or trains; the latter concerns what produces noise in a room or enclosed space. Another major issue in sound/noise visualization is whether we can distinguish the mutual dependence of noise sources in space: for example, we are asked to answer the question, "Can we see two birds singing, or one bird with two beaks?"

The Comparison of Characteristics of Korean, Chinese and Japanese Traditional Flower Arts Used in Royal Court Ceremonies (한국과 중국 및 일본의 궁중 전통 꽃꽂이 특징비교)

  • Hong, Hoon Ki;Lee, Jong Suk
    • FLOWER RESEARCH JOURNAL
    • /
    • v.18 no.2
    • /
    • pp.125-135
    • /
    • 2010
  • To identify the main characteristics of Korean traditional flower arrangement, flower arrangements used in royal court ceremonies were compared across articles and old paintings. The primary focus of the research was principles of design. The periods examined were the Joseon Dynasty of Korea, the Ming Dynasty of China, and the Edo period of Japan. The results, showing both similarities and differences, are summarized as follows. The similarities were that all three traditions respected the features of nature, and their imagery expressed the creator's thinking. All shared one technique, called 'Suje', in which part of the stem emerges from a branch. Each of the three traditions also preferred flowering and ornamental trees over annuals or foliage plants. One of the differences was that Korea used a small number of materials: the work had volume and appeared mild through soft, repetitive, massive curved lines. The Joseon Dynasty advanced a sense of beauty with artistic symmetry and balance; the work seemed soft and natural because of the small changes in blank space, with almost no angular lines. The form was characteristically taller than the typical Japanese arrangement, and appeared simple, calm, and rustic through the use of a single kind of material. In contrast, the Chinese style was gorgeous and voluminous, in a non-symmetrical tripodal form incorporating various colors and materials. The Chinese also avoided processing the materials in order to emphasize the original beauty of nature, and Chinese flower arts never became formalized because formality was not taken seriously. The Japanese style was also gorgeous, incorporating various materials and angles. It included extreme techniques in which an artificial line divided the blank space delicately; the line was both strong and delicate within an established form. The restriction of the main branches gave a light feeling, as well as more tension in the sense of balance. The Japanese emphasized most strongly the use of line and a sense of blank space.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, artificial neural networks (ANNs) have been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation wastes much time Fourier transforming the encoded parameters on the hologram into the value to be evaluated; depending on the speed of the computer, this process can last up to ten minutes.
It will be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer random trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed: the initial population contains fewer random holograms and is supplemented with approximately desired ones. Figure 1 is a flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network attains approximately desired holograms that are in fairly good agreement with theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "O" from holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured; the simulation and experimental results are in fairly good agreement with each other. In this paper, a genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No. M01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
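The seeding idea described above — initializing part of the GA population with approximately desired holograms instead of purely random ones — can be sketched in miniature. Everything here is a toy stand-in: the target pattern, population sizes, operators, and the `seed` mask (which substitutes for the trained ANN's output) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                          # hologram is an N x N binary phase mask (0 or pi)
target = np.zeros((N, N))
target[N // 4, N // 4] = 1.0    # toy target: a single bright diffraction spot

def fitness(mask):
    """Diffraction-efficiency proxy: fraction of FFT energy on the target spot."""
    field = np.exp(1j * np.pi * mask)            # binary phase 0 / pi
    spectrum = np.abs(np.fft.fft2(field)) ** 2
    return (spectrum * target).sum() / spectrum.sum()

def evolve(population, generations=100, p_mut=0.01):
    """Elitist GA: keep the top half, refill with crossover + mutation."""
    for _ in range(generations):
        scores = np.array([fitness(m) for m in population])
        order = np.argsort(scores)[::-1]
        parents = [population[i] for i in order[: len(population) // 2]]
        children = []
        while len(children) < len(population) - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            cut = rng.integers(1, N)             # one-point row crossover
            child = np.vstack([parents[a][:cut], parents[b][cut:]])
            child ^= (rng.random((N, N)) < p_mut)  # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Classical GA would start from a fully random population.
random_pop = [rng.integers(0, 2, (N, N)) for _ in range(20)]

# Hybrid GA: part of the population is seeded with approximate solutions
# (here a crude random stand-in for the ANN output described in the paper).
seed = (rng.random((N, N)) < 0.5).astype(int)
hybrid_pop = [seed.copy() for _ in range(10)] + random_pop[:10]

best = evolve(hybrid_pop)
print(f"best diffraction-efficiency proxy: {fitness(best):.3f}")
```

With a real trained network supplying the seeds, the initial population starts much closer to the optimum, which is the source of the reported computation-time saving.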


Simultaneous Removal of NO and SO2 Using Microbubbles and a Reducing Agent (마이크로버블과 환원제를 이용한 습식 NO 및 SO2의 동시제거)

  • Song, Dong Hun;Kang, Jo Hong;Park, Hyun Sic;Song, Hojun;Chung, Yongchul G.
    • Clean Technology
    • /
    • v.27 no.4
    • /
    • pp.341-349
    • /
    • 2021
  • In combustion facilities, the nitrogen and sulfur in fossil fuels react with oxygen to generate air pollutants such as nitrogen oxides (NOX) and sulfur oxides (SOX), which are harmful to the human body and cause environmental pollution. There are worldwide regulations to reduce NOX and SOX, and various technologies are being applied to meet them. Commercialized methods for reducing NOX and SOX emissions include selective catalytic reduction (SCR), selective non-catalytic reduction (SNCR), and wet flue gas desulfurization (WFGD), but because of the disadvantages of these methods, many studies have been conducted on removing NOX and SOX simultaneously. However, even the simultaneous removal methods have problems: wastewater generation from oxidants and absorbents, costs incurred by catalysts and the electrolysis needed to activate specific oxidants, and the harmfulness of gaseous oxidants themselves. Therefore, in this research, microbubbles generated by a high-pressure disperser and reducing agents were used to reduce costs and facilitate wastewater treatment, compensating for the shortcomings of simultaneous NOX/SOX treatment methods. It was confirmed through image processing and electron spin resonance (ESR) analysis that the disperser generates true microbubbles. NOX and SOX removal tests at various temperatures were also conducted using microbubbles alone. With a reducing agent and microbubbles to reduce wastewater, the removal efficiencies of NOX and SOX were about 75% and 99%, respectively. When a small amount of oxidizing agent was added to the microbubble system, both NOX and SOX removal rates reached 99% or more. Based on these findings, the suggested method is expected to contribute to solving the cost and environmental problems associated with wet oxidation removal methods.

Detection of Forest Fire Damage from Sentinel-1 SAR Data through the Synergistic Use of Principal Component Analysis and K-means Clustering (Sentinel-1 SAR 영상을 이용한 주성분분석 및 K-means Clustering 기반 산불 탐지)

  • Lee, Jaese;Kim, Woohyeok;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1373-1387
    • /
    • 2021
  • Forest fires pose a significant threat to the environment and society, affecting the carbon cycle and surface energy balance and resulting in socioeconomic losses. Widely used multi-spectral satellite image-based approaches for burned area detection have the problem that they do not work under cloudy conditions. Therefore, in this study, Sentinel-1 Synthetic Aperture Radar (SAR) data from the European Space Agency, which can be collected in all weather conditions, were used to identify forest fire damage through a series of processes including Principal Component Analysis (PCA) and K-means clustering. Four forest fire cases were examined, which occurred in Gangneung·Donghae and Goseong·Sokcho in Gangwon-do, South Korea, and in two areas of North Korea on April 4, 2019. The estimated burned areas were evaluated using fire reference data provided by the National Institute of Forest Science (NIFOS) for the two South Korean cases, and the differenced normalized burn ratio (dNBR) for all four cases. The average accuracy using the NIFOS reference data was 86% for the Gangneung·Donghae and Goseong·Sokcho fires. Evaluation using dNBR showed an average accuracy of 84% for all four cases. It was also confirmed that the stronger the burn intensity, the higher the detection accuracy, and vice versa. Given the advantages of SAR remote sensing, the proposed statistical processing and K-means clustering-based approach can be used to quickly identify forest fire damage across the Korean Peninsula, where cloud cover rates are high and small-scale forest fires occur frequently.
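The PCA-plus-K-means pipeline described above can be sketched on synthetic data. The scene, the planted burned patch, and the minimal `kmeans` helper are hypothetical stand-ins for the Sentinel-1 processing chain, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for pre- and post-fire SAR backscatter (dB):
# a 64 x 64 scene in which a square patch loses backscatter after the fire.
pre = rng.normal(-8.0, 1.0, (64, 64))
post = pre + rng.normal(0.0, 0.5, (64, 64))
post[20:40, 20:40] -= 8.0       # planted burned patch: 400 / 4096 pixels

# One row per pixel, two features (the two acquisition dates), mean-centered.
X = np.column_stack([pre.ravel(), post.ravel()])
X -= X.mean(axis=0)

# PCA via SVD; the leading component captures the pre/post change signal.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt.T               # pixels projected onto the components

def kmeans(data, k=2, iters=20):
    """Minimal K-means, initialized at the extremes of the first component."""
    centroids = data[[data[:, 0].argmin(), data[:, 0].argmax()]]
    for _ in range(iters):
        dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(scores).reshape(pre.shape)
# Treat the smaller cluster as "burned": fires affect a minority of pixels.
burned = labels == np.argmin(np.bincount(labels.ravel()))
print(f"detected burned fraction: {burned.mean():.3f}")   # planted: ~0.098
```

On real Sentinel-1 scenes the same steps would be preceded by radiometric calibration and speckle filtering, and the cluster map would be compared against reference data such as NIFOS polygons or dNBR.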