• Title/Summary/Keyword: Polarization Performance (편광성능)


Vehicle Visible Light Communication System Utilizing Optical Noise Mitigation Technology (광(光)잡음 저감 기술을 이용한 차량용 가시광 통신시스템)

  • Nam-Sun Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.413-419
    • /
    • 2023
  • Light Emitting Diodes(LEDs) are widely utilized not only in lighting but also in various applications such as mobile phones, automobiles, and displays. The integration of LED lighting with communication, specifically Visible Light Communication(VLC), has gained significant attention. This paper presents the direct implementation and experimentation of a Vehicle-to-Vehicle(V2V) visible light communication system using the red and yellow LEDs commonly found in typical vehicles. Data collected from the leading vehicle, including positional and speed information, were modulated using Non-Return-to-Zero On-Off Keying(NRZ-OOK) and transmitted through the rear lights equipped with red and yellow LEDs. A photodetector(PD) received the visible light signals, demodulated the data, and restored it. To mitigate interference from fluorescent lights and natural light, a PD for interference removal was installed, and an interference removal device using a polarizing filter and a differential amplifier was employed. The performance of the proposed visible light communication system was analyzed in an ideal case and in indoor and outdoor environments. In an outdoor setting, at a distance of approximately 30[cm] and a transmission rate of 4800[bps] for inter-vehicle data transmission, the red LED exhibited a performance improvement of approximately 13.63[dB], while the yellow LED showed an improvement of about 11.9[dB].
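
A minimal sketch of the noise-mitigation idea in the abstract above, under illustrative assumptions (the 120 Hz flicker model, the noise levels, and the least-squares gain step are stand-ins, not the paper's circuit): one photodetector receives the NRZ-OOK signal plus ambient interference, a second photodetector behind the polarizing filter sees mostly the ambient component, and a differential stage subtracts a scaled copy of that reference.

```python
# Sketch of polarizing-filter + differential-amplifier noise cancellation.
# Illustrative parameters only; not the hardware described in the paper.
import numpy as np

rng = np.random.default_rng(0)
RATE = 4800                      # bit rate from the paper [bps]
FS = 48 * RATE                   # assumed sampling rate [Hz]
N_BITS = 200

bits = rng.integers(0, 2, N_BITS)
nrz_ook = np.repeat(bits, FS // RATE).astype(float)   # NRZ-OOK waveform

t = np.arange(nrz_ook.size) / FS
ambient = 0.8 * np.sin(2 * np.pi * 120 * t)           # fluorescent-flicker model
signal_pd = nrz_ook + ambient + 0.05 * rng.standard_normal(t.size)
reference_pd = 0.95 * ambient + 0.05 * rng.standard_normal(t.size)  # noise-only PD

# Differential stage: estimate how strongly the ambient term appears in the
# signal channel (least-squares gain), then subtract the scaled reference.
gain = np.dot(signal_pd, reference_pd) / np.dot(reference_pd, reference_pd)
cleaned = signal_pd - gain * reference_pd

def snr_db(received, ideal):
    """SNR of a received waveform against the ideal transmitted waveform."""
    err = received - ideal
    return 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))

print(f"SNR before cancellation: {snr_db(signal_pd, nrz_ook):5.1f} dB")
print(f"SNR after cancellation:  {snr_db(cleaned, nrz_ook):5.1f} dB")
```

The least-squares gain works here because the data waveform is roughly uncorrelated with the ambient reference, which is the same property the analog differential amplifier exploits.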

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent forms of databases have changed from static database structures to dynamic stream database structures. Previous data mining techniques have been used as tools for decision making, such as the establishment of marketing strategies and DNA analyses. However, the capability to analyze real-time data more quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations with respect to parts of databases or each of their transactions, instead of all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, we can obtain the latest mining results reflecting real-time information. For this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As the criteria of our performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms conduct their mining work on databases featuring gradually increasing numbers of items. With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and thus has to search multiple nodes to reach a candidate frequent pattern. On the other hand, hMiner shows worse maximum memory usage than Lossy counting: hMiner must keep the complete information for each candidate frequent pattern in its hash buckets, while Lossy counting reduces the stored information through its lattice structure. Since Lossy counting's storage can share items concurrently included in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner presents better efficiency than Lossy counting in the scalability evaluation, for the following reasons: as the number of items increases, fewer items are shared, so Lossy counting's memory efficiency weakens; furthermore, as the number of transactions grows, its pruning effect becomes worse. From the experimental results, we can conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more memory-efficient so that they can also be utilized in resource-constrained environments such as WSNs (wireless sensor networks).
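
To make the pruning behaviour discussed above concrete, here is a minimal sketch of the classic Lossy counting algorithm for single items; the paper applies the same idea to patterns mined from transactions, and the epsilon value and toy stream below are illustrative choices, not the evaluated implementation.

```python
# Classic Lossy counting (Manku & Motwani) for single items.
import math

class LossyCounter:
    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.width = math.ceil(1 / epsilon)   # bucket width w = ceil(1/eps)
        self.n = 0                            # items seen so far
        self.entries = {}                     # item -> (count, max_error)

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)   # current bucket id
        count, delta = self.entries.get(item, (0, bucket - 1))
        self.entries[item] = (count + 1, delta)
        if self.n % self.width == 0:              # bucket boundary: prune
            self.entries = {i: (c, d) for i, (c, d) in self.entries.items()
                            if c + d > bucket}

    def frequent(self, support):
        """Items whose true frequency may reach support * n."""
        threshold = (support - self.epsilon) * self.n
        return [i for i, (c, _) in self.entries.items() if c >= threshold]

lc = LossyCounter(epsilon=0.01)
for item in ["a", "b", "a", "c", "a", "b"] * 1000:
    lc.add(item)
print(lc.frequent(support=0.2))   # -> ['a', 'b'] (only these reach 20%)
```

The periodic pruning bounds memory at the cost of undercounting each item by at most epsilon × n; hMiner instead updates its hash structure on every incoming transaction, which is why it returns up-to-date results without waiting for a bucket boundary.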

Design and Analysis of a Laser Lift-Off System using an Excimer Laser (엑시머 레이저를 사용한 LLO 시스템 설계 및 분석)

  • Kim, Bo Young;Kim, Joon Ha;Byeon, Jin A;Lee, Jun Ho;Seo, Jong Hyun;Lee, Jong Moo
    • Korean Journal of Optics and Photonics
    • /
    • v.24 no.5
    • /
    • pp.224-230
    • /
    • 2013
  • Laser Lift-Off (LLO) is a process that removes a GaN or AlN thin layer from a sapphire wafer to manufacture vertical-type LEDs. The LLO system consists of a light source, an attenuator, a mask, a projection lens, and a beam homogenizer. In this paper, we design an attenuator and a projection lens. We use the 'ZEMAX' optical design software to analyze the depth of focus and to design a projection lens that produces a 7×7 mm² beam on the wafer. Using the 'LightTools' lighting design software, we analyze the size and uniformity of the beam projected onto the wafer by the projection lens. The performance analysis found that the size of the square-shaped beam is 6.97×6.96 mm², with 91.8% uniformity and ±30 μm depth of focus. In addition, this study designs a dielectric coating using the 'Essential Macleod' software to increase the transmittance of the attenuator. As a result, for a 23-layer thin-film coating, the total transmittance spans 10-96% at angles of incidence of 45-60° in S-polarization.
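
As a small illustration of how such an angle-tuned attenuator is operated: given the coating's S-polarization transmittance curve over the 45-60° working range (spanning 10-96% per the abstract), the incidence angle for a desired transmittance can be found by inverting the curve. The sample values below are placeholders, not the 23-layer coating data from the paper.

```python
# Pick the attenuator angle for a target transmittance by inverting an
# assumed, monotonically decreasing T(theta) curve (placeholder values).
import numpy as np

angles_deg = np.array([45, 48, 51, 54, 57, 60])                 # incidence angle
transmittance = np.array([0.96, 0.80, 0.55, 0.33, 0.18, 0.10])  # assumed T(theta)

def angle_for_transmittance(t_target):
    """Invert T(theta) by interpolation (np.interp needs increasing x)."""
    return float(np.interp(t_target, transmittance[::-1], angles_deg[::-1]))

for t in (0.9, 0.5, 0.25):
    print(f"T = {t:.2f} -> set attenuator angle to {angle_for_transmittance(t):.1f} deg")
```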

Analysis of the effect of birefringence of a plastic fθ lens on the beam diameter (플라스틱 fθ렌즈의 복굴절이 결상빔경에 미치는 영향분석)

  • 임천석
    • Korean Journal of Optics and Photonics
    • /
    • v.11 no.2
    • /
    • pp.73-79
    • /
    • 2000
  • We measure the beam diameters in the scan and sub-scan directions of an LSU (Laser Scanning Unit) that uses an fθ lens, produced by the injection molding method, as its scanning lens. While the measured beam diameter in the scan direction, 62 μm to 68 μm, is similar to the design beam diameter, the sub-scan beam diameter shows a sizable deviation of as much as 37 μm, ranging from 78 μm to 115 μm. An injection-molded lens has surface figure error due to shrinkage during cooling and internal distortion (birefringence) due to uneven cooling conditions; these bring about wavefront aberration (i.e., enlargement of the beam size) and are eventually expressed as deterioration of the printed image. In this paper, we first measure and analyze the beam diameter, birefringence (polarization ratio), and aspherical figure error of the fθ lens in order to identify the principal cause of the beam diameter deviation in the sub-scan direction. Then, through analysis of the designed depth of focus and the field curvature (imaging position along the optical-axis direction) calculated using the above figure error data, we find that birefringence is the main factor in the sizable beam diameter deviation in the sub-scan direction.
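
A back-of-the-envelope sketch of the mechanism identified above, assuming an ideal Gaussian beam: birefringence-induced field curvature shifts the best-focus position along the optical axis, and once that shift exceeds the depth of focus, the sub-scan beam diameter grows. The wavelength and waist radius below are illustrative assumptions, not the LSU's design data.

```python
# Gaussian-beam growth with defocus: d(z) = d0 * sqrt(1 + (z / z_R)^2).
import math

WAVELENGTH = 780e-9   # assumed laser-diode wavelength [m]
W0 = 39e-6            # assumed waist radius [m] (78 um design diameter / 2)

def beam_diameter_um(defocus_m):
    """1/e^2 beam diameter at a given axial distance from the waist."""
    z_r = math.pi * W0**2 / WAVELENGTH     # Rayleigh range (~6 mm here)
    return 2 * W0 * math.sqrt(1 + (defocus_m / z_r)**2) * 1e6

for dz_mm in (0.0, 2.0, 4.0, 6.0):
    print(f"defocus {dz_mm:3.1f} mm -> beam diameter {beam_diameter_um(dz_mm * 1e-3):6.1f} um")
```

With these assumed numbers, a few millimetres of focal shift is enough to grow a 78 μm beam toward the 115 μm scale reported above, which is why field curvature beyond the designed depth of focus appears directly as sub-scan beam diameter deviation.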


Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies used to utilize item importance, itemset mining approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These mining algorithms compute transactional weights by utilizing the weight of each item in large databases, and they discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the most famous algorithms in the field of frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms need no additional database scan after the construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, traditional algorithms conduct numerous database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms solve this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination process using the information of the transactions that contain both itemsets. WIT-FWIs-MODIFY has a unique feature that decreases the operations needed to calculate the frequency of a new itemset. WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because on average it requires far more computations than the others.
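
A minimal sketch of the transaction-weight idea described above, following one common formulation (the exact definitions used by WIS and the WIT-FWIs family may differ): a transaction's weight is the average of its items' weights, and an itemset's weighted support is the sum of the weights of the transactions that contain it. The item weights and transactions are toy data.

```python
# Brute-force weighted-support computation over toy data.
from itertools import combinations

item_weights = {"a": 0.9, "b": 0.7, "c": 0.4, "d": 0.2}   # illustrative weights
transactions = [{"a", "b"}, {"a", "b", "c"}, {"b", "c", "d"}, {"a", "d"}]

def transaction_weight(t):
    """Average weight of the items in a transaction."""
    return sum(item_weights[i] for i in t) / len(t)

def weighted_support(itemset):
    """Sum of the weights of the transactions containing the itemset."""
    return sum(transaction_weight(t) for t in transactions if itemset <= t)

MIN_WSUP = 1.0
items = sorted(item_weights)
for size in range(1, len(items) + 1):
    for combo in combinations(items, size):
        wsup = weighted_support(set(combo))
        if wsup >= MIN_WSUP:
            print(f"{set(combo)}: weighted support = {wsup:.2f}")
```

This brute-force enumeration rescans the transactions for every candidate, which is exactly the overhead the WIT-tree avoids: by keeping transaction IDs at each node, the algorithms above combine the ID lists of two length-N itemsets instead of rescanning the database.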