• Title/Summary/Keyword: grid size

Search Results: 718

A study on the Nano Wire Grid Polarizer Film by Magnetic Soft Mold (Magnetic soft mold를 이용한 나노 와이어 그리드 편광 필름 연구)

  • Jo, Sang-Uk;Chang, Sunghwan;Choi, Doo-Sun;Huh, Seok-Hwan;Jeong, Myung Yung
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.21 no.2
    • /
    • pp.85-89
    • /
    • 2014
  • We propose a new fabrication method for a high-performance 70 nm half-pitch wire grid polarizer using a magnetic soft mold. The device consists of aluminium gratings on a PET (polyethylene terephthalate) substrate whose 3 cm × 3 cm size is compatible with a TFT-LCD (thin-film-transistor liquid crystal display) panel. A magnetic soft mold with a pitch of 70 nm is fabricated using a two-step replication method. As a result, we obtain an NWGP pattern with a 70.39 nm line width, 64.76 nm depth, and 140.78 nm pitch on the substrate. The maximum and minimum transmittances of the NWGP at 800 nm are 75% and 10%, respectively. This work demonstrates a unique cost-effective solution for nanopatterning requirements in consumer electronics components.
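
The abstract reports only maximum and minimum transmittances; below is a quick, hypothetical back-of-the-envelope check of the corresponding polarizer figures of merit, using the conventional definitions of extinction ratio and degree of polarization (neither is stated in the paper):

```python
# Hypothetical check of the NWGP figures quoted above; the definitions of
# extinction ratio and degree of polarization are the conventional ones,
# not taken from the paper.
t_max = 0.75  # maximum transmittance at 800 nm
t_min = 0.10  # minimum transmittance at 800 nm

extinction_ratio = t_max / t_min                             # ~7.5
degree_of_polarization = (t_max - t_min) / (t_max + t_min)   # ~0.76

print(f"extinction ratio ≈ {extinction_ratio:.1f}")
print(f"degree of polarization ≈ {degree_of_polarization:.2f}")
```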

An Automatic Filtering Technique for LiDAR Data by Multiple Linear Regression Analysis (다중선형 회귀분석에 의한 LiDAR 자료의 필터링 자동화 기법)

  • Choi, Seung-Pil;Cho, Ji-Hyun;Kim, Jun-Seong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.19 no.4
    • /
    • pp.109-118
    • /
    • 2011
  • This research estimated the accuracy of global filtering, in which a single plane equation derived from the whole data set is applied over the entire area, and of regional filtering, in which a plane equation is derived for each virtual grid cell. Both were evaluated against a baseline filtering whose plane equation was deduced by multiple linear regression analysis from the ground data set. The average accuracy of global filtering using the whole data set dropped by about 2~3% relative to this ground-data baseline, while the accuracy of regional filtering based on the virtual grid dropped by about 2~4%. Moreover, a virtual grid set to 3~4 cm differed by about 2% in accuracy from the reference data, leading to the conclusion that setting the virtual grid cell 3~4 times larger than the LiDAR scan gap is more appropriate. The results show that the average accuracy differs depending on the approach applied, and further study on real topography is needed to improve filtering accuracy.
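
The abstract does not give the fitting details; the following is a minimal sketch, not taken from the paper, of regional filtering in which a plane z = ax + by + c is fitted by least-squares multiple linear regression to the points in each virtual grid cell, and points are classified as ground by a hypothetical residual threshold:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c via multiple linear regression."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # a, b, c

def regional_filter(points, cell_size, threshold=0.2):
    """Mark points as ground if they lie close to the plane fitted inside
    their virtual grid cell. The threshold value is hypothetical."""
    ground_mask = np.zeros(len(points), dtype=bool)
    cells = np.floor(points[:, :2] / cell_size).astype(int)  # cell index per point
    for cell in np.unique(cells, axis=0):
        idx = np.where((cells == cell).all(axis=1))[0]
        if len(idx) < 3:                     # need at least 3 points for a plane
            continue
        a, b, c = fit_plane(points[idx])
        residuals = np.abs(points[idx, 2] - (a * points[idx, 0] + b * points[idx, 1] + c))
        ground_mask[idx] = residuals < threshold
    return ground_mask
```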

SURFACE RECONSTRUCTION FROM SCATTERED POINT DATA ON OCTREE

  • Park, Chang-Soo;Min, Cho-Hon;Kang, Myung-Joo
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.16 no.1
    • /
    • pp.31-49
    • /
    • 2012
  • In this paper, we propose a very efficient method which reconstructs a high-resolution surface from a set of unorganized points. Our method is based on the level set method using an adaptive octree. We start from the surface reconstruction model proposed in [20], which introduced a very fast and efficient method that differs from previous level-set-based approaches. Most existing methods [21, 22] employ a time-evolving process from an initial surface toward the point cloud. In [20], however, the surface reconstruction process is posed as an elliptic problem in a narrow band containing the point cloud, so the method is very fast because the time step is not limited by the finite speed of propagation. However, that model was implemented only on a uniform grid, so it still requires a great deal of memory. Its algorithm solves a large linear system whose size equals the number of grid points in the narrow band. Besides, it is not easy to make the band narrow enough, since the choice of band width depends on the distribution of the point data. As long as the method is implemented on a uniform grid, it is almost impossible to generate a high-resolution surface because the memory requirement grows geometrically. We resolve this by adapting an octree data structure [12, 11] to our problem and by introducing a new redistancing algorithm which differs from the existing one [19].
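
A minimal sketch, not from the paper, of the kind of adaptive octree refinement the method relies on: cells are recursively subdivided only where point data exist, so fine cells concentrate near the point cloud while empty regions stay coarse (the narrow-band elliptic solver and the new redistancing algorithm are not reproduced here):

```python
import numpy as np

class OctreeNode:
    def __init__(self, center, half_size, depth):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.depth = depth
        self.children = []          # empty for leaf cells

def build_octree(node, points, max_depth):
    """Recursively subdivide cells that contain points, up to max_depth.
    Cells without points stay coarse, which is what keeps memory low."""
    if node.depth >= max_depth or len(points) == 0:
        return node
    h = node.half_size / 2.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child_center = node.center + np.array([dx, dy, dz])
                inside = np.all(np.abs(points - child_center) <= h, axis=1)
                if inside.any():
                    child = OctreeNode(child_center, h, node.depth + 1)
                    node.children.append(build_octree(child, points[inside], max_depth))
    return node

# Example: refine a unit cube around random sample points up to depth 6.
pts = np.random.rand(1000, 3)
root = build_octree(OctreeNode([0.5, 0.5, 0.5], 0.5, 0), pts, max_depth=6)
```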

Fabrication of a Nano-Wire Grid Polarizer for Brightness Enhancement in TFT-LCD Display (TFT-LCD용 휘도 성능을 향상시키는 나노 와이어 그리드 편광 필름의 제작)

  • Huh, Jong-Wook;Nam, Su-Yong
    • Journal of the Korean Graphic Arts Communication Society
    • /
    • v.29 no.3
    • /
    • pp.105-124
    • /
    • 2011
  • A TFT-LCD consists of an LCD panel on top, a circuit unit on the side, and a BLU (backlight unit) at the bottom. Recent development issues for BLU-dependent TFT-LCDs have been minimizing power consumption, reducing thickness, and maximizing size. As a result of this trend, LEDs have been adopted as the BLU instead of CCFLs to increase brightness and reduce thickness. In liquid crystal displays, the light efficiency is below 10% due to the loss of light along the path from the light source to the LCD panel and the presence of an absorptive polarizer. This low efficiency results in low brightness and high power consumption. One way to circumvent this situation is to use a reflective polarizer between the backlight unit and the LCD panel. Since a nano-wire grid polarizer is known to act as a reflective polarizer, it was proposed for enhancing the brightness of an LCD. The use of reflective polarizing film is increasing as the edge-type LED TV and 3D TV markets grow. This study fabricated a nano-wire grid polarizer (NWGP) and investigated the brightness enhancement of an LCD through polarization recycling by placing the NWGP between the LCD panel and the backlight unit. NWGPs with a pitch of 200 nm were fabricated using laser interference lithography, aluminum sputtering, and wet etching. The NWGP fabrication process used UV imprinting and was applied to a plastic PET film. In this case, the brightness of an LCD with the NWGP was 1.21 times higher than that without it, due to polarization recycling.

Towards efficient sharing of encrypted data in cloud-based mobile social network

  • Sun, Xin;Yao, Yiyang;Xia, Yingjie;Liu, Xuejiao;Chen, Jian;Wang, Zhiqiang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.4
    • /
    • pp.1892-1903
    • /
    • 2016
  • Mobile social networks are becoming more and more popular with the development and spread of mobile devices and interpersonal sociality. As the amount of social data increases greatly and cloud computing techniques mature, the architecture of mobile social networks has evolved into a cloud-based one in which mobile clients send data to the cloud and make the data accessible from clients. The data in the cloud should be stored in a secure fashion to protect user privacy and restrict data sharing as defined by users. Ciphertext-policy attribute-based encryption (CP-ABE) is currently considered a promising security solution for encrypting sensitive data in cloud-based mobile social networks. However, its ciphertext size and decryption time grow linearly with the number of attributes in the access structure. In order to reduce the computing overhead borne by mobile devices, in this paper we propose a new Outsourcing decryption and Match-then-decrypt CP-ABE algorithm (OM-CP-ABE), which first outsources the computation-intensive bilinear pairing operations to a proxy, and second performs a decryption test on whether the attribute set matches the access policy in the ciphertext. The experimental performance assessments show the security strength and efficiency of the proposed solution in terms of computation, communication, and storage. Our construction is also proven to be replayable chosen-ciphertext attack (RCCA) secure based on the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model.
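
The cryptographic construction itself is not reproduced here; the sketch below only illustrates the match-then-decrypt idea in plain Python, with a hypothetical AND/OR policy tree standing in for the real access structure: a cheap test of whether the user's attributes satisfy the ciphertext's policy is run before any expensive (and, in OM-CP-ABE, outsourced) decryption work is attempted:

```python
# Hypothetical access-policy tree: ("AND"/"OR", [children]) or a leaf attribute string.
def satisfies(node, attributes):
    """Return True if the attribute set satisfies the policy tree."""
    if isinstance(node, str):                       # leaf attribute
        return node in attributes
    gate, children = node
    results = [satisfies(child, attributes) for child in children]
    return all(results) if gate == "AND" else any(results)

def match_then_decrypt(ciphertext, attributes, outsourced_decrypt):
    """Cheap policy match first; only then run the expensive decryption,
    which OM-CP-ABE would outsource to a proxy."""
    if not satisfies(ciphertext["policy"], attributes):
        return None                                 # no expensive work wasted
    return outsourced_decrypt(ciphertext)

# Toy usage with a stand-in for the real pairing-based decryption.
policy = ("AND", ["student", ("OR", ["dept:cs", "dept:math"])])
ct = {"policy": policy, "payload": "ciphertext bytes"}
plaintext = match_then_decrypt(ct, {"student", "dept:cs"},
                               outsourced_decrypt=lambda c: c["payload"])
```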

Bottleneck link bandwidth Measurement Algorithm for improving end-to-end transit delay in Grid network (그리드 네트워크에서 종단간 전송 지연 향상을 위한 bottleneck 링크 대역폭 측정 알고리즘)

  • Choi, Won-Seok;Ahn, Seong-Jin;Chung, Jin-Wook
    • The KIPS Transactions:PartC
    • /
    • v.10C no.7
    • /
    • pp.923-928
    • /
    • 2003
  • This paper proposes a bottleneck link bandwidth measurement algorithm for reducing packet transmission delay within a grid network. There are two existing methods for measuring bottleneck link bandwidth: the Packet Pair algorithm and the Paced Probes algorithm. They measure bottleneck link bandwidth using the difference in arrival times of two paced probe packets of the same size traveling from the same source to the same destination, and they reduce the influence of cross traffic by means of pacer packets. These algorithms share a problem, however: it is not possible to know where the bottleneck link occurs, because they only measure the smallest link bandwidth along the path without considering the bandwidth of every link on the path. A hop-by-hop bottleneck link bandwidth measurement algorithm can therefore be used to reduce packet transmission delay in a grid network. The timestamp option was used on the paced probe packets for link-level measurement of bottleneck bandwidth, and the reduction in packet transmission delay was simulated by resolving a bottleneck link. The algorithm suggested in this paper can contribute to data transmission that ensures FTP and real-time QoS by detecting both the bandwidth and the location of the bottleneck link.
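
A minimal sketch of the packet-pair principle the abstract describes, with all numbers assumed for illustration: two back-to-back probe packets of the same size are dispersed by the bottleneck link, and the bottleneck bandwidth is estimated as the packet size divided by the arrival-time gap:

```python
def packet_pair_bandwidth(packet_size_bytes, arrival_gap_s):
    """Bottleneck bandwidth estimate from packet-pair dispersion, in bits/s."""
    return packet_size_bytes * 8 / arrival_gap_s

# Illustrative numbers only: 1500-byte probes arriving 1.2 ms apart
# suggest a bottleneck of about 10 Mbit/s.
bw = packet_pair_bandwidth(1500, 1.2e-3)
print(f"estimated bottleneck bandwidth ≈ {bw / 1e6:.1f} Mbit/s")
```

A hop-by-hop variant along the lines of the paper would apply the same estimate per link (using the timestamp option on the probes) and report the link with the smallest value, thereby locating the bottleneck rather than only measuring it.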

Effect of Domain Size on Flow Characteristics in Simulating Periodic Obstacle Flow (주기적인 경계조건을 사용하는 수치모사에서 계산영역 크기의 영향)

  • Choi, Choon-Bum;Jang, Yong-Jun;Kim, Jin-Ho;Han, Seok-Youn;Yang, Kyung-Soo
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.33 no.5
    • /
    • pp.349-357
    • /
    • 2009
  • The effect of computational domain size in simulating periodic obstacle flow has been investigated for flow past tube banks. The Reynolds number, defined by the freestream velocity (U∞) and the cylinder diameter (d), was fixed at 200, and the center-to-center distance (P) at 1.5d. In-line square and staggered square arrays were considered. The drag coefficient, lift coefficient, and Strouhal number were calculated as functions of domain size. Circular cylinders were implemented on a Cartesian grid system using an immersed boundary method, with periodic boundary conditions in both the streamwise and lateral directions. Previous studies in the literature often use a square domain with a side length of P, which contains only one cylinder. This study, however, reveals that such a domain size is inadequate. In particular, the RMS values of the flow-induced forces are the most sensitive to the domain size.
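
As a concrete illustration of the geometry only (the immersed-boundary solver is not sketched), the snippet below generates cylinder centers for a periodic square box holding n × n cylinders at the stated pitch P = 1.5d, for in-line and staggered square arrays; n = 1 reproduces the single-cylinder domain of side P that the study finds too small:

```python
import numpy as np

def tube_bank_centers(n, d=1.0, pitch_ratio=1.5, staggered=False):
    """Cylinder centers inside a periodic square box of side n * P.
    n = 1 gives the single-cylinder domain of side P used in earlier studies."""
    P = pitch_ratio * d
    centers = []
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * P, (j + 0.5) * P
            if staggered:
                # shift every other row by half a pitch, wrapped periodically
                x = (x + 0.5 * P * (j % 2)) % (n * P)
            centers.append((x, y))
    return np.array(centers), n * P   # centers and periodic box size

centers, box_size = tube_bank_centers(n=4, staggered=True)
```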

Effects of DEM Resolution on Hydrological Simulation in BASINS-HSPF Modeling

  • Jeon, Ji-Hong;Ham, Jong-Hwa;Chun G. Yoon;Kim, Seong-Joon
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.44 no.7
    • /
    • pp.25-35
    • /
    • 2002
  • In this study, the effect of DEM (Digital Elevation Model) resolution (15 m, 30 m, 50 m, 70 m, 100 m, 200 m, 300 m) on hydrological simulation was examined using BASINS (Better Assessment Science Integrating point and Nonpoint Source) for the Heukcheon watershed (303.3 km²) with data from 1998 to 1999. Generally, as the DEM cell size increased, topographical changes were observed and the original elevation range decreased. Watershed delineation and river network extraction required more processing time and effort at smaller DEM cell sizes. The coarser DEMs showed some errors at river network junctions, which might affect the simulation of water quantity and quality. The area-weighted average watershed slope became milder, but the length-weighted average channel slope became steeper, as the DEM cell size increased. DEM resolution substantially affected the topographical parameters but had less effect on the hydrological simulation. Considering processing time and accuracy of the hydrological simulation, a DEM grid size of 100 m is recommended for this range of watershed size.
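
A minimal sketch of the kind of resampling involved when the DEM cell size is increased, assuming simple block averaging (the resampling scheme actually used in the study is not stated); coarsening a fine DEM this way shrinks the elevation range, consistent with the trend reported above:

```python
import numpy as np

def coarsen_dem(dem, factor):
    """Aggregate a DEM to a coarser grid by block-averaging factor x factor cells.
    Block averaging is an assumption; the study does not specify the scheme."""
    rows = (dem.shape[0] // factor) * factor
    cols = (dem.shape[1] // factor) * factor
    blocks = dem[:rows, :cols].reshape(rows // factor, factor, cols // factor, factor)
    return blocks.mean(axis=(1, 3))

# Synthetic fine-resolution DEM, purely illustrative.
fine = np.random.default_rng(0).normal(100.0, 15.0, size=(600, 600))
coarse = coarsen_dem(fine, factor=20)       # e.g. 15 m cells -> 300 m cells
print(fine.max() - fine.min(), coarse.max() - coarse.min())  # elevation range shrinks
```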

New Bubble Size Distribution Model for Cryogenic High-speed Cavitating Flow

  • Ito, Yutaka;Tomitaka, Kazuhiro;Nagasaki, Takao
    • Proceedings of the Korean Society of Propulsion Engineers Conference
    • /
    • 2008.03a
    • /
    • pp.700-710
    • /
    • 2008
  • A bubble size distribution model has been developed for the numerical simulation of cryogenic high-speed cavitating flow in the turbopumps of liquid-fuel rocket engines. The new model is based on a previous one proposed by the authors, in which the bubble number density was solved as a function of bubble size at each grid point of the calculation domain by means of an Eulerian framework with respect to the bubble size coordinate. In the previous model, the growth/decay of bubbles due to the pressure difference between bubble and liquid was solved exactly based on the Rayleigh-Plesset equation. However, the unsteady heat transfer between liquid and bubble, which controls the evaporation/condensation rate, was approximated by a theoretical solution of unsteady heat conduction under a constant temperature difference. In the present study, the unsteady temperature field in the liquid around a bubble is also solved exactly in order to establish an accurate and efficient numerical simulation code for cavitating flows. The growth/decay of a single bubble and the growth of bubbles with nucleation were successfully simulated by the proposed model.
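
Only the Rayleigh-Plesset part, which the abstract says is solved exactly, is sketched below for a single bubble, with liquid properties and pressures assumed for illustration (the coupled unsteady heat-conduction solution of the paper is not reproduced):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed liquid properties (roughly water at room temperature), illustrative only.
rho, mu, sigma = 998.0, 1.0e-3, 0.072        # density, viscosity, surface tension
p_inf, p_b = 101325.0, 130000.0              # far-field and bubble pressures (Pa)

def rayleigh_plesset(t, y):
    """y = [R, dR/dt]; classical Rayleigh-Plesset equation for a single bubble."""
    R, dR = y
    ddR = ((p_b - p_inf) / rho
           - 4.0 * mu * dR / (rho * R)
           - 2.0 * sigma / (rho * R)
           - 1.5 * dR**2) / R
    return [dR, ddR]

# 50-micron bubble initially at rest; it grows because p_b exceeds p_inf + 2*sigma/R.
sol = solve_ivp(rayleigh_plesset, (0.0, 1e-4), [50e-6, 0.0],
                method="RK45", max_step=1e-7)
print(f"final radius ≈ {sol.y[0, -1]:.3e} m")
```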


Feasibility Study on the Optimization of Offsite Consequence Analysis by Particle Size Distribution Setting and Multi-Threading (입자크기분포 설정 및 멀티스레딩을 통한 소외사고영향분석 최적화 타당성 평가)

  • Seunghwan Kim;Sung-yeop Kim
    • Journal of the Korean Society of Safety
    • /
    • v.39 no.1
    • /
    • pp.96-103
    • /
    • 2024
  • The demand for mass calculation of offsite consequence analysis to conduct exhaustive single-unit or multi-unit Level 3 PSA is increasing. In order to perform efficient offsite consequence analyses, the Korea Atomic Energy Research Institute is conducting model optimization studies to minimize the analysis time while maintaining the accuracy of the results. A previous study developed a model optimization method using efficient plume segmentation and verified its effectiveness. In this study, we investigated the possibility of optimizing the model through particle size distribution setting by checking the reduction in analysis time and deviation of the results. Our findings indicate that particle size distribution setting affects the results, but its effect on analysis time is insignificant. Therefore, it is advantageous to set the particle size distribution as fine as possible. Furthermore, we evaluated the effect of multithreading and confirmed its efficiency. Future optimization studies should be conducted on various input factors of offsite consequence analysis, such as spatial grid settings.