• Title/Summary/Keyword: the image of translator


Opto-Mechanical Detailed Design of the G-CLEF Flexure Control Camera

  • Jae Sok Oh;Chan Park;Kang-Min Kim;Heeyoung Oh;UeeJeong Jeong;Moo-Young Chun;Young Sam Yu;Sungho Lee;Jeong-Gyun Jang;Bi-Ho Jang;Sung-Joon Park;Jihun Kim;Yunjong Kim;Andrew Szentgyorgyi;Stuart McMuldroch;William Podgorski;Ian Evans;Mark Mueller;Alan Uomoto;Jeffrey Crane;Tyson Hare
    • Journal of The Korean Astronomical Society / v.56 no.2 / pp.169-185 / 2023
  • The GMT-Consortium Large Earth Finder (G-CLEF) is the first instrument for the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. The G-CLEF Flexure Control Camera (FCC) is included as part of the G-CLEF Front End Assembly (GCFEA); it monitors the field images focused on a fiber mirror in order to control the flexure and focus errors within the GCFEA. The FCC consists of an optical bench on which five optical components are installed. The order of the optical train is: a collimator, neutral density filters, a focus analyzer, a reimager, and a detector (Andor iKon-L 936 CCD camera). The collimator consists of a triplet lens and receives the beam reflected by the fiber mirror. The neutral density filters make it possible to handle a broad range of star brightnesses, whether the star serves as a target or as a guide. The focus analyzer is used to measure a focus offset. The reimager focuses the beam from the collimator onto the CCD detector focal plane. The detector module includes a linear translator and a field de-rotator. We performed a thermoelastic stress analysis of the lenses and their mounts to confirm the physical safety of the lens materials. We also conducted a global structural analysis for various gravitational orientations to verify the image stability requirement during operation of the telescope and the instrument. In this article, we present the opto-mechanical detailed design of the G-CLEF FCC and describe the results of the numerical finite element analyses performed for the design.
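The abstract above refers to a thermoelastic stress analysis of the lenses and their mounts; the paper's results come from detailed finite element models, but a first-order, closed-form estimate can illustrate the quantity being checked. The sketch below is only an illustration under assumed material values (a BK7-like glass in an aluminum cell and an assumed temperature swing); none of the numbers are taken from the paper.

```python
# First-order estimate of thermoelastic stress in a rigidly mounted lens.
# Illustrative only: material values and the temperature swing are assumed,
# not taken from the G-CLEF FCC paper, which relies on finite element analysis.

def thermoelastic_stress(e_glass_pa, cte_glass, cte_mount, delta_t_k, poisson=0.21):
    """Fully constrained CTE-mismatch stress: sigma = E * d_alpha * dT / (1 - nu)."""
    d_alpha = abs(cte_mount - cte_glass)
    return e_glass_pa * d_alpha * delta_t_k / (1.0 - poisson)

# Assumed values: BK7-like glass (E ~ 82 GPa, alpha ~ 7.1e-6 /K, nu ~ 0.21)
# held in an aluminum cell (alpha ~ 23.6e-6 /K) over a 20 K temperature swing.
sigma = thermoelastic_stress(
    e_glass_pa=82e9,
    cte_glass=7.1e-6,
    cte_mount=23.6e-6,
    delta_t_k=20.0,
)
print(f"estimated stress: {sigma / 1e6:.1f} MPa")  # ~34 MPa with these assumptions
```

A bonded or flexure-mounted lens sees far less stress than this fully constrained bound, which is why the detailed design is verified with finite element models of the actual mounts rather than a hand estimate.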

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT), and it includes many video cameras that are networked together. These networked cameras, together with sensors, supply some of the main input data for U-City services, and they continuously generate a huge amount of video information, real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often required that the accumulated video data be analyzed to detect an event or find a figure in them, which demands a great deal of computational power and usually takes a long time. Current research tries to reduce the processing time for such big video data, and cloud computing is a good solution to this problem. Among the many cloud computing methodologies that could be applied, MapReduce is an interesting and attractive one; it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by the networked cameras; we are coping with real big data when we deal with the video produced by high-quality cameras. Video surveillance systems were of limited use before cloud computing, but they are now being widely deployed in U-Cities thanks to such methodologies. Because video data are unstructured, it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of a video manager, video monitors, storage for the video images, a storage client, and a streaming IN component. The "video monitor" consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocols: its "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video image, and its "protocol manager" sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. In this paper we suggest our own methodology for analyzing the video images using MapReduce: the workflow of the video analysis is presented and explained in detail. The performance evaluation was carried out by experiment, and we found that the proposed system worked well.
The performance evaluation results are presented in this paper with analysis. On our cluster system, we used compressed 1920×1080 (FHD) resolution video data, the H.264 codec, and HDFS as the video storage. We measured the processing time according to the number of frames per mapper, and by tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
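The abstract names the main components (streaming IN, storage client, video monitor with a video translator and a protocol manager, and HDFS-backed storage holding the MapReduce analyzer) only in prose. The minimal Python sketch below is one way to picture the data flow between them; every class, method, and path name here is hypothetical and merely mirrors the roles described in the abstract, not the authors' actual implementation.

```python
# Hypothetical structural sketch of the surveillance-system components described
# in the abstract. Names are illustrative only; the paper does not publish code.

class StorageClient:
    """Writes incoming video to HDFS and mediates storage access for other components."""
    def __init__(self, hdfs_uri):
        self.hdfs_uri = hdfs_uri  # e.g. "hdfs://namenode:8020/ucity/video" (placeholder)

    def store(self, camera_id, chunk):
        # A real system would append the chunk to an HDFS file via a Hadoop client.
        pass

class StreamingIn:
    """Receives video from networked cameras and smooths network bottlenecks."""
    def __init__(self, storage_client):
        self.storage_client = storage_client

    def on_camera_data(self, camera_id, chunk):
        # Buffering / back-pressure handling would live here.
        self.storage_client.store(camera_id, chunk)

class VideoTranslator:
    """Adjusts resolution, codec, and frame rate of an outgoing stream."""
    def transcode(self, chunk, resolution="1920x1080", codec="h264", fps=30):
        return chunk  # placeholder for an actual transcoding step

class ProtocolManager:
    """Serves streams to viewers over RTSP or RTMP."""
    def serve(self, stream, protocol="rtsp"):
        pass

class VideoMonitor:
    """Streams stored video out, combining translation and protocol handling."""
    def __init__(self, storage_client):
        self.storage_client = storage_client
        self.translator = VideoTranslator()
        self.protocols = ProtocolManager()
```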
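The evaluation varies the number of frames per mapper, but the MapReduce job itself is not reproduced in the abstract. Below is a minimal Hadoop Streaming sketch under the assumption that frames have already been extracted and listed one path per line in an HDFS manifest: the mapper runs a placeholder detector on each frame and the reducer sums detections per label. The manifest layout, the paths, and detect_objects() are assumptions for illustration; the authors' actual workflow may use native Hadoop MapReduce rather than Streaming.

```python
#!/usr/bin/env python
# mapper.py -- Hadoop Streaming mapper (illustrative sketch, not the paper's code).
# Assumes each input line is the HDFS path of one extracted video frame.
# Submitted with something like (jar location and paths are placeholders):
#   hadoop jar hadoop-streaming.jar \
#       -input /ucity/frames/manifest.txt -output /ucity/analysis \
#       -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py
import sys

def detect_objects(frame_path):
    # Hypothetical stand-in for the per-frame analysis (event or figure detection).
    return ["person"] if frame_path.endswith(".jpg") else []

for line in sys.stdin:
    frame_path = line.strip()
    if not frame_path:
        continue
    for label in detect_objects(frame_path):
        print(f"{label}\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- Hadoop Streaming reducer: sums detection counts per label.
# Relies on the framework sorting mapper output by key before the reduce phase.
import sys

current_label, count = None, 0
for line in sys.stdin:
    label, value = line.rstrip("\n").split("\t", 1)
    if label != current_label:
        if current_label is not None:
            print(f"{current_label}\t{count}")
        current_label, count = label, 0
    count += int(value)
if current_label is not None:
    print(f"{current_label}\t{count}")
```

With this kind of setup, the "frames per mapper" knob from the evaluation corresponds to how many manifest lines each map task receives, which can be controlled through the input split size or NLineInputFormat's lines-per-map setting; scaling the number of nodes then exercises the roughly linear behaviour the authors report.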