• Title/Abstract/Keyword: Rendering

Search Results: 1,797

Design and fabrication of a high power LED searchlight (고출력 LED 탐조등의 설계 및 제작)

  • Kim, Se-Jin;Kim, Sun-Jae;Ha, Hee-Ju;Kil, Gyung-Suk;Kim, Il-Kwon
    • Journal of Advanced Marine Engineering and Technology, v.38 no.6, pp.737-743, 2014
  • This paper dealt with a retrofit high-power LED searchlight intended to replace conventional 1 kW halogen searchlights. The design specification meets KDS 6230-1046-1 and KS V 8469. An optical lens with a beam angle of $6^{\circ}$ was used to meet the required luminous intensity of 800,000 cd at $0^{\circ}$ on the horizontal line. For heat dissipation, the LED searchlight adopted free-air cooling without a fan or heat pipe. Test results showed that the power consumption of the prototype LED searchlight was 148 W, an 85% saving compared with a 1 kW halogen searchlight. The luminous intensity was 945,000 cd at $0^{\circ}$ on the horizontal line, satisfying KS V 8469, and the luminous efficacy was 4.7 times higher than that of the halogen searchlight. The beam angle, color temperature, and color rendering index (CRI) were $5.4^{\circ}$, 5,500 K, and 70, respectively. The surface temperature of the LED searchlight was below $60^{\circ}C$, and the temperature around the SMPS installed inside was below $50^{\circ}C$, both satisfying IEC 60092-306.
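A quick arithmetic check of the quoted power saving, using only the figures stated in the abstract (a minimal sketch, not part of the paper):

```python
# Consistency check of the power saving quoted in the abstract.
halogen_power_w = 1000.0   # conventional halogen searchlight (1 kW)
led_power_w = 148.0        # measured consumption of the LED prototype

saving = (halogen_power_w - led_power_w) / halogen_power_w
print(f"Power saving: {saving:.1%}")   # ~85.2%, consistent with the reported 85%
```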

Software Development for the Integrated Visualization of Brain Tumor and its Surrounding Fiber Tracts (뇌종양 및 그 주변 신경다발의 통합적 가시화를 위한 소프트웨어의 개발)

  • Oh Jungsu;Cho Ik Hwan;Na Dong Gyu;Chang Kee Hyun;Park Kwang Suk;Song In Chan
    • Investigative Magnetic Resonance Imaging, v.9 no.1, pp.2-8, 2005
  • Purpose: The purpose of this study was to implement software for visualizing a tumor and its surrounding fiber tracts simultaneously using diffusion tensor imaging, and to examine the feasibility of the software for investigating the influence of the tumor on the connectivity of the surrounding fibers. Materials and Methods: MR examination, including T1-weighted and diffusion tensor images of a patient with a brain tumor, was performed on a 3.0 T MRI unit. We used the skull-stripped brain and segmented tumor images for volume/surface rendering, and anatomical information from contrast-enhanced T1-weighted images. Diffusion tensor images for white matter fiber tractography were acquired using SE-EPI with a diffusion scheme of 25 directions. Fiber tractography was performed using the streamline and tensorline methods. To correct the spatial mismatch between the T1-weighted and diffusion tensor images, they were coregistered using SPM. The software was implemented on a Windows-based PC. Results: We successfully implemented the integrated visualization of the fiber tracts with tube-like surfaces, the cortical surface, and the tumor with volume/surface rendering in a patient with a brain tumor. Conclusion: Our results show the feasibility of the integrated visualization of a brain tumor and its surrounding fiber tracts. In addition, this implementation can be used to navigate the brain for quantitative analysis of fractional anisotropy, in order to assess changes in white matter tract integrity in edematous and peri-edematous regions in tumor patients.
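The streamline tractography mentioned above amounts to integrating a path along the principal eigenvector of the diffusion tensor until anisotropy becomes too low. Below is a minimal Python sketch of that idea, not the authors' implementation; the `tensor_field` array (one 3x3 tensor per voxel), step size, and FA threshold are illustrative assumptions:

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """FA computed from the three eigenvalues of a diffusion tensor."""
    l1, l2, l3 = eigvals
    md = (l1 + l2 + l3) / 3.0
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2) + 1e-12
    return np.sqrt(1.5) * num / den

def streamline(tensor_field, seed, step=0.5, fa_stop=0.2, max_steps=2000):
    """Trace one fiber by Euler integration along the principal eigenvector."""
    point = np.asarray(seed, dtype=float)
    path = [point.copy()]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(point).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, tensor_field.shape[:3])):
            break                                   # left the image volume
        eigvals, eigvecs = np.linalg.eigh(tensor_field[idx])
        if fractional_anisotropy(eigvals) < fa_stop:
            break                                   # left coherent white matter
        direction = eigvecs[:, -1]                  # principal eigenvector
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction                  # keep a consistent orientation
        point = point + step * direction
        path.append(point.copy())
        prev_dir = direction
    return np.array(path)
```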


Animation Generation for Chinese Character Learning on Mobile Devices (모바일 한자 학습 애니메이션 생성)

  • Koo, Sang-Ok;Jang, Hyun-Gyu;Jung, Soon-Ki
    • Journal of KIISE: Computer Systems and Theory, v.33 no.12, pp.894-906, 2006
  • Developing mobile content is difficult because of the many constraints of mobile environments, and simply shrinking existing wired-Internet content visually does not produce good mobile content. It is therefore essential to devise a suitable data representation and an authoring tool that meet the needs of the mobile content market. We propose compact mobile content for learning Chinese characters and have developed its authoring tool. The animation our system produces looks as realistic as if someone were writing the characters with a pen or brush, and the authoring tool lets a user generate a Chinese character animation easily and quickly, even without much knowledge of computer graphics, mobile programming, or Chinese characters. The stroke animation is generated as follows. We take the basic character shape, represented as several contours, from a TrueType font (TTF), and obtain the stroke segmentation and stroke ordering from simple user input. We then decompose the whole character shape into strokes using polygonal approximation. Next, the animation for each stroke is generated automatically by a scan-line algorithm ordered along the stroke direction. Finally, the ordered scan lines are compressed into a few integers by removing coordinate redundancy. As a result, the stroke animation produced by our system is even smaller than a GIF animation. Our method can be extended to the rendering and animation of Hangul or of general vector-based 2D shapes. We plan to find a way to automate the stroke segmentation and ordering without user input.
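As an illustration of the scan-line stroke animation and the coordinate-redundancy compression described above, here is a minimal Python sketch; the span representation, cumulative frames, and delta encoding are illustrative assumptions, not the paper's actual data format:

```python
from typing import List, Tuple

Span = Tuple[int, int, int]   # (y, x_start, x_end): one filled scan line of a stroke

def order_spans(spans: List[Span], top_to_bottom: bool = True) -> List[Span]:
    """Order the scan-line spans along the writing direction of the stroke."""
    return sorted(spans, key=lambda s: s[0], reverse=not top_to_bottom)

def animate(spans: List[Span], frames: int) -> List[List[Span]]:
    """Split the ordered spans into cumulative frames, imitating a pen filling the stroke."""
    ordered = order_spans(spans)
    per_frame = -(-len(ordered) // frames)          # ceiling division
    return [ordered[: (i + 1) * per_frame] for i in range(frames)]

def compress(spans: List[Span]) -> List[int]:
    """Delta-encode consecutive spans so most stored values become small integers."""
    out: List[int] = []
    prev = (0, 0, 0)
    for y, x0, x1 in order_spans(spans):
        out += [y - prev[0], x0 - prev[1], x1 - prev[2]]
        prev = (y, x0, x1)
    return out
```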

A Real-time Single-Pass Visibility Culling Method Based on a 3D Graphics Accelerator Architecture (실시간 단일 패스 가시성 선별 기법 기반의 3차원 그래픽스 가속기 구조)

  • Choo, Catherine;Choi, Moon-Hee;Kim, Shin-Dug
    • The KIPS Transactions: Part A, v.15A no.1, pp.1-8, 2008
  • Occlusion culling, one of the visibility culling methods, excludes invisible objects or triangles that are covered by other objects. Because it reduces the amount of computation, occlusion culling is an effective way to handle complex scenes in real time. However, a common existing method, the hardware occlusion query, sends object data to the GPU twice, once for the occlusion test and once for rendering, which causes processing overhead. Another existing hardware occlusion culling method, VCBP, can test object visibility quickly, but it neither tests bounding volumes nor returns test results to the application stage. In this paper, we propose a single-pass occlusion culling method that uses temporal and spatial coherence, together with an efficient occlusion culling hardware architecture. In our approach, the hardware performs the occlusion test rapidly with a cache at the rasterization stage, where triangles are transformed into fragments, and at the same time sends each primitive's visibility information to the application stage. The application stage then reduces the amount of transmitted data by excluding covered objects, using the visibility information from the previous frame and a hierarchical spatial tree. The proposed method improved performance by up to 44% and by at least 14% compared with the S&W method based on hardware occlusion queries, and by 25% and 17% compared with the maximum and minimum performance of the CHC method, which is also based on occlusion queries.
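To illustrate how previous-frame visibility and a hierarchical spatial tree can reduce what the application sends to the GPU, here is a simplified application-stage traversal in Python; the node and frustum interfaces and the `send_to_gpu` callback are hypothetical stand-ins for the paper's hardware-assisted pipeline:

```python
def cull_and_send(node, frustum, prev_visible, send_to_gpu):
    """
    Traverse a bounding-volume hierarchy and send only potentially visible
    geometry. `prev_visible` maps node ids to the visibility bit reported by
    the hardware for the previous frame (temporal coherence).
    """
    if not frustum.intersects(node.bounds):
        return                                       # outside the view frustum
    if not prev_visible.get(node.id, True):
        # Reported hidden last frame: send only the bounding volume so the
        # hardware can re-test it and report back for the next frame.
        send_to_gpu(node.bounds, bounding_volume_only=True)
        return
    if node.is_leaf():
        send_to_gpu(node.geometry, bounding_volume_only=False)
    else:
        for child in node.children:
            cull_and_send(child, frustum, prev_visible, send_to_gpu)
```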

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal, v.1 no.3, pp.36-42, 2012
  • Recently, smart applications such as smart phones and smart TVs have become a hot issue in the IT consumer market. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video uses stereoscopic or multi-view images to provide a depth experience through 3D display systems. Binocular cues are perceived by rendering appropriate viewpoint images obtained at slightly different viewing angles. Since the number of viewpoints in multi-view video is limited, 3D display devices must generate arbitrary viewpoint images from the available adjacent views. In this paper, after briefly explaining the view synthesis method, we propose a new algorithm that compensates for view synthesis errors around object boundaries. We describe a 3D warping technique that exploits the depth map for viewpoint shifting, and a hole filling method that uses the multi-view images. We then propose an algorithm that removes the boundary noise caused by mismatches between object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.
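For rectified cameras and purely horizontal disparity, the depth-based viewpoint shift described above can be sketched as follows; this is a simplified illustration of 3D warping with a z-buffer and a hole mask, not the authors' implementation, and the focal length and baseline parameters are assumptions:

```python
import numpy as np

def warp_view(color, depth, focal, baseline):
    """Shift each pixel by its disparity d = focal * baseline / depth and
    record which target pixels were never written (disocclusion holes)."""
    h, w = depth.shape
    warped = np.zeros_like(color)
    z_buffer = np.full((h, w), np.inf)
    filled = np.zeros((h, w), dtype=bool)
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - disparity[y, x]))
            if 0 <= xt < w and depth[y, x] < z_buffer[y, xt]:
                warped[y, xt] = color[y, x]          # nearer surfaces win
                z_buffer[y, xt] = depth[y, x]
                filled[y, xt] = True
    return warped, ~filled                           # warped image and hole mask
```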


Consider the directional hole filling method for virtual view point synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal, v.3 no.4, pp.28-34, 2014
  • Recently, the depth-image-based rendering (DIBR) method has been widely used in 3D imaging applications. A virtual view image is created from a known view and its associated depth map in order to synthesize a viewpoint that was not captured by a camera. However, disocclusion areas occur because the virtual viewpoint is created by depth-image-based 3D warping. Many hole filling methods have been proposed to remove such disocclusion regions, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting. These methods, however, can produce various kinds of annoying artifacts when filling holes in textured regions. In this paper, to solve this problem, we propose a multi-directional extrapolation method that improves hole filling performance. The proposed method is effective when filling holes in complex textured background regions: by considering directionality, it estimates each hole pixel from the neighboring texture pixel values. Our results confirm that the proposed method fills the hole regions generated by virtual view synthesis more effectively.
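Below is a minimal sketch of the idea of directional extrapolation: probe several directions from a hole pixel, prefer the direction whose nearby known pixels are most uniform, and extrapolate their mean. It is only an illustration, not the authors' exact algorithm; the eight directions, sample count, and variance criterion are assumptions:

```python
import numpy as np

DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def fill_hole_pixel(image, hole_mask, y, x, samples=4):
    """Estimate one hole pixel from the most coherent neighbouring texture run."""
    h, w = hole_mask.shape
    best = None
    for dy, dx in DIRECTIONS:
        values, py, px = [], y, x
        while len(values) < samples:
            py, px = py + dy, px + dx
            if not (0 <= py < h and 0 <= px < w):
                break                                 # ran off the image
            if not hole_mask[py, px]:
                values.append(np.asarray(image[py, px], dtype=float))
        if len(values) == samples:
            stack = np.stack(values)
            score = np.var(stack, axis=0).sum()       # lower variance = flatter texture
            if best is None or score < best[0]:
                best = (score, stack.mean(axis=0))
    return None if best is None else best[1]
```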

Manufacture of 3-Dimensional Image and Virtual Dissection Program of the Human Brain (사람 뇌의 3차원 영상과 가상해부 풀그림 만들기)

  • Chung, M.S.;Lee, J.M.;Park, S.K.;Kim, M.K.
    • Proceedings of the KOSOMBE Conference, v.1998 no.11, pp.57-59, 1998
  • For medical students and doctors, knowledge of the three-dimensional (3D) structure of the brain is very important in the diagnosis and treatment of brain diseases. Two-dimensional (2D) tools (e.g., anatomy books) and traditional 3D tools (e.g., plastic models) are not sufficient for understanding the complex structures of the brain, yet dissecting a cadaver brain is not always possible when it is needed. To overcome this problem, virtual dissection programs of the brain have been developed. However, most programs include only 2D images, which do not permit free dissection and free rotation, and many are made from radiographs, which are not as realistic as a sectioned cadaver because they do not show true color and have limited resolution. It is also necessary to make virtual dissection programs for each race and ethnic group. We therefore attempted to make a virtual dissection program using a 3D image of the brain from a Korean cadaver; the purpose of this study is to present an educational tool for those interested in the anatomy of the brain. The procedure was as follows. A brain extracted from a 58-year-old male Korean cadaver was embedded in gelatin solution and serially sectioned into 1.4 mm thick slices using a meat slicer. The 130 sectioned specimens were input to the computer using a scanner ($420\times456$ resolution, true color), and the 2D images were aligned with an alignment program written in the IDL language. Outlines of the brain components (cerebrum, cerebellum, brain stem, lentiform nucleus, caudate nucleus, thalamus, optic nerve, fornix, cerebral artery, and ventricle) were drawn manually from the 2D images in CorelDRAW. Multimedia data, including text and voice comments, were added to help the user learn about the brain components. 3D images of the brain were reconstructed through volume-based rendering of the 2D images. Using the 3D image of the brain as the main feature, the virtual dissection program was also written in IDL. Various dissection functions were implemented, such as cutting the 3D brain image at a free angle to show its plane, presenting the multimedia data of the brain components, and rotating the 3D image of the whole brain or of selected components at a free angle. This virtual dissection program is expected to become more advanced and to be widely used through the Internet or on CD as an educational tool for medical students and doctors.
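The free-angle dissection function described above essentially samples an oblique plane from the stacked slice volume. A minimal Python sketch of that operation follows (the original software was written in IDL; the grayscale array layout and nearest-neighbour sampling here are illustrative assumptions):

```python
import numpy as np

def extract_plane(volume, origin, u_dir, v_dir, size, step=1.0):
    """Sample an oblique cutting plane from a volume of stacked sections
    (slices x rows x cols) spanned by two in-plane direction vectors."""
    u = np.asarray(u_dir, float); u /= np.linalg.norm(u)
    v = np.asarray(v_dir, float); v /= np.linalg.norm(v)
    out = np.zeros((size, size), dtype=volume.dtype)
    for i in range(size):
        for j in range(size):
            p = np.asarray(origin, float) + step * (i * u + j * v)
            idx = np.round(p).astype(int)
            if np.all(idx >= 0) and np.all(idx < volume.shape):
                out[i, j] = volume[tuple(idx)]        # nearest-neighbour sample
    return out

# volume = np.stack(aligned_sections)   # e.g. 130 aligned section images (hypothetical)
# cut = extract_plane(volume, origin=(65, 0, 0), u_dir=(0, 1, 0), v_dir=(1, 0, 1), size=256)
```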


Estimation of Optimal Ecological Flowrate of Fish in Chogang Stream (초강천에서 어류의 최적 생태유량 산정)

  • Hur, Jun Wook;Kim, Dae Hee;Kang, Hyeongsik
    • Ecology and Resilient Infrastructure, v.1 no.1, pp.39-48, 2014
  • In order to establish fundamental data for stream restoration and environmental flow, we investigated the optimal ecological flowrate (OEF) and riverine health condition of the Chogang Stream, a tributary of the Geum River, Korea. A total of 4,669 fish individuals belonging to 36 species in 9 families were sampled during the study period. The most abundant species was the Korean chub (Zacco koreanus, 34.0%), followed by the pale chub (Z. platypus, 22.6%) and the Korean shiner (Coreoleuciscus splendidus, 13.3%). Index of biological integrity (IBI) and qualitative habitat evaluation index (QHEI) values decreased from upstream to downstream. The estimated IBI value ranged from 27.9 to 38.6, with an average of 32.2 out of 50, rendering the site in ecologically fair to good health. OEF was estimated with the physical habitat simulation system (PHABSIM), using the habitat suitability indexes (HSI) of three fish species, Z. koreanus, C. splendidus, and Pseudopungtungia nigra, selected as indicator species. For Z. koreanus, the HSI ranges for flow velocity and water depth were estimated at 0.1 to 0.4 m/s and 0.2 to 0.4 m, respectively. For P. nigra, the HSI ranges for flow velocity, water depth, and substrate size were estimated at 0.2 to 0.5 m/s, 0.4 to 0.6 m, and fine gravel to cobbles, respectively. The OEF values increased from upstream to downstream, and the weighted usable area (WUA) values increased accordingly.
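For reference, PHABSIM-style weighted usable area is obtained by weighting each computational cell's area with a composite of its velocity, depth, and substrate suitabilities; the sketch below uses one common product form. The numbers are illustrative only and are not the Chogang Stream data:

```python
def composite_suitability(v_si, d_si, s_si):
    """Composite suitability of one cell (product form, one of the PHABSIM options)."""
    return v_si * d_si * s_si

def weighted_usable_area(cells):
    """WUA for one simulated discharge: sum of cell area times composite suitability.
    `cells` holds (area_m2, velocity_SI, depth_SI, substrate_SI) tuples with SI in [0, 1]."""
    return sum(a * composite_suitability(v, d, s) for a, v, d, s in cells)

# Illustrative values only:
cells = [(2.0, 0.9, 0.8, 1.0), (2.0, 0.4, 0.6, 0.7), (2.0, 0.1, 0.2, 0.5)]
print(f"WUA = {weighted_usable_area(cells):.2f} m^2 per unit stream length")
```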

Effects of Type and Thickness of Flexible Packaging Films on Perforation by Plodia interpunctella (유연포장 필름의 종류 및 두께에 따른 화랑곡나방 침투율 연구)

  • Lee, Soo Hyun;Kwon, Sang-Jo;Lee, Sang Eun;Kim, Jeong-Heon;Lee, Jung-Soo;Na, Ja Hyun;Han, Jaejoon
    • Korean Journal of Food Science and Technology, v.46 no.6, pp.739-742, 2014
  • This study investigated the perforation of various flexible food-packaging films by Indian meal moth (Plodia interpunctella) larvae, in relation to film thickness and type. Among the various flexible packaging films, polyethylene (PE), aluminum foil (AF), polypropylene (PP), polystyrene (PS), and polyethylene terephthalate (PET) were selected because of their wide use in food packaging. Film penetration by P. interpunctella larvae, for the given thicknesses, was observed in the following order: PP, $20{\mu}m$; AF, $9{\mu}m$; PET, $12{\mu}m$; PP, $30{\mu}m$; PS, $30{\mu}m$; PE, $40{\mu}m$; PE, $35{\mu}m$; PS, $60{\mu}m$; and PET, $16{\mu}m$. The larvae rapidly penetrated the packaging films regardless of thickness and type. In particular, the $20{\mu}m$ PP and $30{\mu}m$ PS films were completely penetrated within 72 h, rendering thin PP and PS films of little value as insect-resistant packaging. Perforations by P. interpunctella larvae were observed mainly in the thin films, implying that each packaging film has a marginal thickness with respect to perforation by the larvae.

Evaluation for Cytopreservability of Manual Liquid-Based Cytology $Liqui-PREP^{TM}$ and its Application to Cerebrospinal Fluid Cytology: Comparative Study with Cytospin (수기 액상세포검사 $Liqui-PREP^{TM}$의 세포보존력 평가 및 뇌척수액 세포검사에의 적용: 세포원심분리법과의 비교)

  • Park, Gyeong-Sin;Lee, Kyung-Ji;Jung, Chan-Kwon;Lee, Dae-Hyoung;Cho, Bin;Lee, Youn-Soo;Shim, Sang-In;Lee, Kyo-Young;Kang, Chang-Suk
    • The Korean Journal of Cytopathology, v.18 no.1, pp.46-54, 2007
  • Cerebrospinal fluid (CSF) cytology is an effective tool for evaluating diseases involving the central nervous system, but the technique is usually limited by low cellularity and poor cellular preservation. Here we compared the manual liquid-based $Liqui-PREP^{TM}$ (LP) method with cytospin (CS) using a mononuclear cell suspension, and applied both methods to CSF samples from pediatric leukemia patients. Cytopreservability, in terms of cell yield and cell size, and clinical efficacy were evaluated. When 2,000 and 4,000 mononuclear cells were applied, LP was superior to CS in cell yield: 16.8% vs 1.7% (P=0.001) and 26.2% vs 3.5% (P=0.002), respectively. The mean size of the smeared cells was 10.60 ${\mu}m$ with CS, 5.01 ${\mu}m$ with LP, and 6.50 ${\mu}m$ with the direct smear (DS); the size ratios were 1.7 (CS to DS), 0.8 (LP to DS), and 2.1 (CS to LP), respectively. Compared with the cells in the DS, the cells in the CS were significantly enlarged, whereas those in the LP were slightly shrunken. When applied to 109 CSF samples, CS diagnosed 4 as positive for leukemia, 4 as having atypical cells, and 101 as negative, while LP diagnosed 6 as positive, 1 as having atypical cells, and 102 as negative. The six cases called positive by LP comprised the four CS-positive cases and two of the four CS-atypical cases; all six were confirmed as positive on follow-up. Three cases diagnosed as having atypical cells (two by CS and one by LP) were confirmed as negative. In conclusion, these results suggest that LP is superior to CS in cytopreservability and in rendering a definite diagnosis in cerebrospinal fluid.