• Title/Abstract/Keyword: visual sensing system

Search results: 120 items (processing time: 0.025 s)

DESIGNING AND DEVELOPING E-MAP COMPONENT USING UML

  • Jo Myung-Hee; Jo Yun-Won; Kim Dong-Young
    • Korean Society of Remote Sensing: Conference Proceedings / Proceedings of ISRS 2005 / pp.466-469 / 2005
  • In this study, an e-map component was designed and developed so that it can be overlaid with thematic maps of all kinds and scales and provide detailed information using high-resolution satellite imagery and GIS. The system also offers a powerful map composition tool for displaying map elements such as a legend, scale bar, and index map. The e-map component was designed with UML, developed on Windows 2000, and implemented using Visual Basic 6.0 as the programming language and ESRI MapObjects 2.1 as the GIS component. With this system, forest officials can generate more detailed topography and the thematic maps they need. In addition, data consistency in the DBMS can be maintained by using SDE (Spatial Database Engine), and the standard forest database can be shared with others in real time.

Cloning of Rod Opsin Genes Isolated from Olive Flounder Paralichthys olivaceus, Japanese Eel Anguilla japonica, and Common Carp Cyprinus carpio

  • Kim, Sung-Wan; Kim, Jong-Myoung
    • Fisheries and Aquatic Sciences / Vol. 12, No. 4 / pp.265-275 / 2009
  • G protein-coupled receptors (GPCRs), which mediate a wide range of physiological responses, are among the most attractive targets for drug development. Rhodopsin, a dim-light photoreceptor, has been used extensively as a model system for structural and functional studies of GPCRs. Fish have rhodopsins finely tuned to their habitats, where the intensity and wavelength of light change with water depth. To study the detailed molecular characteristics of the GPCR architecture and to understand the light-sensing system of fish, genes encoding rod opsins were isolated from fishes living in different photic environments. Full-length rod opsin genes were obtained by a combination of PCR amplification and a DNA walking strategy applied to genomic DNA isolated from olive flounder P. olivaceus, Japanese eel A. japonica, and common carp C. carpio. The deduced amino acid sequences showed typical features of rod opsins, including the sites for Schiff base formation (Lys296) and its counterion (Glu113), disulfide bond formation (Cys110 and Cys187), and palmitoylation (Cys322 and Cys323), although Cys322 is replaced by Phe in Japanese eel. Comparison of the opsins by amino acid sequence alignment indicated the closest similarity between P. olivaceus and H. hippoglossus (94%), between A. japonica and A. anguilla (98%), and between C. carpio and C. auratus (95%).
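
The percent-identity figures above come from pairwise comparison of the deduced amino acid sequences. As a minimal, hypothetical illustration (not the authors' analysis pipeline), the following Python sketch computes percent identity for two pre-aligned sequences of equal length; the example strings are placeholders.

    # Percent identity between two pre-aligned, equal-length amino acid sequences
    # (gaps marked '-'); placeholder data, not sequences from the paper.
    def percent_identity(seq_a: str, seq_b: str) -> float:
        assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
        pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
        matches = sum(a == b for a, b in pairs)
        return 100.0 * matches / len(pairs)

    print(percent_identity("MNGTEGPNFYVPFSNK", "MNGTEGPNFYVPMSNK"))  # 93.75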

레이저 구조광을 이용한 로봇 목표 추적 방법 (Robot Target Tracking Method using a Structured Laser Beam)

  • 김종형;고경철
    • Journal of Institute of Control, Robotics and Systems / Vol. 19, No. 12 / pp.1067-1071 / 2013
  • A 3D visual sensing method using a structured laser beam is presented for robotic tracking applications in a simple and reliable manner. A cylindrically shaped structured laser beam is proposed to measure the pose and position of the target surface. When the proposed laser beam intersects the surface along the target trajectory, an elliptic pattern is generated, and its ellipse parameters can be derived mathematically from the geometric relationship between the sensor and target coordinate frames. The depth and orientation of the target surface are determined directly by the ellipse parameters. In particular, two discontinuous points on the ellipse pattern, induced by the seam trajectory, mathematically indicate the 3D direction for robotic tracking. To investigate the performance of this method, experiments with a 6-axis robot system were conducted on two different types of seam trajectories. The results show that the method is well suited to robot seam tracking applications because of its accuracy and efficiency.
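
As a rough sketch of the geometry described above, and not the authors' implementation, the Python snippet below fits a general conic to sampled image points of the elliptical laser trace and recovers the ellipse centre, semi-axes, and a tilt estimate. It assumes a cylindrical beam of constant cross-section, for which the minor semi-axis stays near the beam radius while the major semi-axis grows as 1/cos of the surface tilt, so arccos(minor/major) approximates the tilt angle; function names and the least-squares formulation are illustrative choices.

    import numpy as np

    def fit_ellipse(x, y):
        """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
        (assumes the ellipse does not pass through the origin)."""
        A = np.column_stack([x**2, x*y, y**2, x, y])
        a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
        Q = np.array([[a, b / 2], [b / 2, c]])
        center = np.linalg.solve(2 * Q, [-d, -e])               # gradient of conic = 0
        k = center @ Q @ center + d * center[0] + e * center[1] - 1.0
        axes = np.sort(np.sqrt(-k / np.linalg.eigvalsh(Q)))     # [semi-minor, semi-major]
        return center, axes

    def tilt_from_axes(axes):
        """Surface tilt relative to the beam axis, assuming a cylindrical beam."""
        return np.degrees(np.arccos(axes[0] / axes[1]))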

Realistic Building Modeling from Sequences of Digital Images

  • Song, Jeong-Heon; Kim, Min-Suk; Han, Dong-Yeob; Kim, Yong-Il
    • Korean Society of Remote Sensing: Conference Proceedings / Proceedings of the 2002 International Symposium on Remote Sensing / pp.516-516 / 2002
  • With the wide use of LiDAR data and high-resolution satellite imagery, 3D modeling of buildings in urban areas has been an important research topic in photogrammetry and computer vision for many years. However, previous modeling approaches are limited to merely texturing the image onto the DSM surface of the study area and do not represent the relief of building surfaces. This study focuses on a system for realistic 3D building modeling from consecutive stereo image sequences taken with a digital camera. Generally, when acquiring images with a camera, various parameters such as zoom, focus, and attitude are needed to extract accurate results, and in certain cases some of these parameters have to be rectified; it is, however, not always possible or practical to precisely estimate or rectify camera positions and attitudes. In this research, we constructed the collinearity condition of the stereo images by extracting distinctive points from the stereo image sequence. In addition, we performed image matching with the graph-cut method, which achieves very high accuracy. The system successfully produced realistic building models with good visual quality, and we concluded that 3D building models of urban areas can be obtained more realistically.
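
The collinearity condition mentioned above is the standard photogrammetric relation stating that an object point, the camera's perspective centre, and the corresponding image point lie on one line. In the usual textbook form, with rotation matrix M = (m_ij) from object space to image space, perspective centre (X_L, Y_L, Z_L), focal length f, and principal point (x_0, y_0), it reads as follows (the paper does not give its exact parameterization, so this is the generic form):

    \[
    x_a = x_0 - f\,\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}
                        {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}, \qquad
    y_a = y_0 - f\,\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}
                        {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}
    \]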

마이크로 BGA 패키지의 볼 형상 시각검사를 위한 모아레 간섭계 기반 3차원 머신 비젼 시스템 (Three-dimensional Machine Vision System based on moire Interferometry for the Ball Shape Inspection of Micro BGA Packages)

  • 김민영
    • Journal of the Microelectronics and Packaging Society / Vol. 19, No. 1 / pp.81-87 / 2012
  • This paper proposes and implements an optical measurement system for measuring the three-dimensional shape of the micro balls inside and outside micro BGA packages. Most visual inspection systems have difficulty inspecting micro balls because of their complex reflection characteristics. For accurate shape measurement, a specially designed vision sensor system is proposed, together with a shape measurement algorithm based on the principle of phase-shifting moire interferometry. The sensor system consists of a pattern projection system with four subsystems and an image acquisition system. Each pattern projection subsystem has a spatially distinct projection direction, so that patterned illumination reaches the measurement object from different incidence directions. For the precise phase shifting required by phase-shifting moire interferometry, the pattern grating of each subsystem is translated in uniform steps by a PZT actuator. To effectively remove the specular reflections and shadow regions of the measured micro balls, the multi-pattern projection system and the image acquisition system are implemented and tested. In particular, a sensor fusion algorithm based on Bayesian sensor fusion theory is proposed to effectively combine the multiple height measurements obtained from the multiple projections. To verify the principle and performance of the proposed system, experiments focusing on measurement repeatability were performed on micro BGA balls and substrate bumps, and the results are analyzed and discussed.
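
As a minimal sketch of the two computational steps described above, and not the paper's exact algorithm, the Python snippet below assumes the common four-step phase-shifting formula (0°, 90°, 180°, 270° grating shifts) and a simple inverse-variance (Gaussian/Bayesian) fusion of the height maps obtained from the different projection directions; array names and shapes are illustrative assumptions.

    import numpy as np

    def phase_4step(i1, i2, i3, i4):
        """Wrapped phase from four intensity images taken at 0, 90, 180, 270 deg shifts."""
        return np.arctan2(i4 - i2, i1 - i3)

    def fuse_heights(heights, variances):
        """Inverse-variance (Bayesian, Gaussian-noise) fusion of height maps
        reconstructed from the individual projection subsystems.
        heights, variances: arrays of shape (n_views, H, W)."""
        w = 1.0 / np.asarray(variances, dtype=float)
        h = np.asarray(heights, dtype=float)
        return (w * h).sum(axis=0) / w.sum(axis=0)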

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan; Safran, Khan; Suyoung, Seo
    • Korean Journal of Remote Sensing / Vol. 39, No. 1 / pp.1-21 / 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image representation that carries information along three axes, i.e., xyz coordinates, where z is the object's distance from the camera. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving. Much work has been done on computing depth maps. We reviewed the status of depth map estimation across different techniques, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional approaches and newly developed deep learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. The review covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics. It also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches, and elaborates on the challenges of the different methods. The study concludes with new ideas for future research in depth map estimation.
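
The "well-known evaluation metrics" referred to above typically include absolute relative error, RMSE, and threshold accuracy. A minimal Python sketch of these common metrics (illustrative, not tied to any specific paper in the review):

    import numpy as np

    def depth_metrics(pred, gt):
        """Common depth-estimation metrics over valid (gt > 0) pixels:
        absolute relative error, RMSE, and the delta < 1.25 threshold accuracy."""
        mask = gt > 0
        pred, gt = pred[mask], gt[mask]
        abs_rel = np.mean(np.abs(pred - gt) / gt)
        rmse = np.sqrt(np.mean((pred - gt) ** 2))
        delta1 = np.mean(np.maximum(pred / gt, gt / pred) < 1.25)
        return {"AbsRel": abs_rel, "RMSE": rmse, "delta<1.25": delta1}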

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu; Kasagami, Fumio; Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992 Korea Automatic Control Conference (International Session); KOEX, Seoul; 19-21 Oct. 1992 / pp.382-387 / 1992
  • To obtain the desired arc welding performance, we previously developed an arc welding robot system that enables coordinated motion of dual-arm robots. In this system, one robot arm holds the welding target as a positioning device and the other moves the welding torch. For such a dual-arm system, the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positional accuracy. To cope with this problem, our robot system employs the teach-and-playback method, in which absolute errors are compensated by the operator's visual feedback. With this system, ideal arc welding that accounts for the posture of the welding target and the direction of gravity becomes possible. Although we developed an original teaching method for dual-arm robots with coordinated motions, another problem remains: manual teaching is tedious because it requires fine movements and intense attention. We therefore developed a 3D vision-guided robot control method for our welding robot system with coordinated motions. In this paper we present the 3D vision sensor that guides the system. The sensing device is compactly designed and mounted on the tip of the arc welding robot; it detects the 3D shape of the groove on the target workpiece to be welded, and the welding robot is controlled to trace the groove accurately. The 3D measurement principle is based on the slit-ray projection method, realized with two laser slit-ray projectors and one CCD TV camera in a compact mount. Careful image processing enables 3D data processing without interference from disturbance light. The 3D information of the target groove is combined with rough teaching data given by the operator in advance, so the teaching task is simplified.
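
The slit-ray projection principle mentioned above amounts to intersecting the camera's viewing ray for a detected stripe pixel with the calibrated laser light plane. A minimal Python sketch under that assumption (variable names and the pre-calibrated plane are illustrative, not the authors' implementation):

    import numpy as np

    def slit_ray_point(u, v, K, plane_n, plane_d):
        """3D point on the laser light plane seen at pixel (u, v).
        K: 3x3 camera intrinsics; the plane n.X + d = 0 is expressed in the
        camera frame and assumed known from calibration."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
        t = -plane_d / float(plane_n @ ray)              # ray/plane intersection
        return t * ray                                   # point in camera coordinates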

UN-REDD 기회비용 산정에서 위성영상 기반의 MRV 여건평가: 금강산을 사례로 (Evaluating MRV Potentials based on Satellite Image in UN-REDD Opportunity Cost Estimation: A Case Study for Mt. Geum-gang of North Korea)

  • 주승민;엄정섭
    • Spatial Information Research / Vol. 22, No. 3 / pp.47-58 / 2014
  • Measurement, Reporting, and Verification (MRV) of the greenhouse gas absorption achieved by reducing deforestation is emerging as a core requirement in estimating the opportunity cost of REDD. The purpose of this study is to identify the potential of satellite-image-based MRV in the UN-REDD opportunity cost estimation process, using Mt. Geum-gang in North Korea as a case area, and to examine in advance the burden-of-proof issues that may arise during MRV. In estimating the UN-REDD opportunity cost, the indicators required for MRV were derived, and it was assessed whether information on the historical deforestation rate, land use, land cover, and carbon stock could be collected from satellite imagery. Visual interpretation of the satellite imagery documented the MRV conditions of Mt. Geum-gang (forest area, forest degradation trends, etc.) as a visible record based on a three-level (large, medium, small) classification scheme. Because satellite imagery is accepted as evidence by the International Court of Justice, the UN, and UN-REDD, it is judged that it can serve as legally binding supporting evidence in the opportunity cost estimation process. It was also confirmed that satellite-based MRV offers an alternative to MRV based on field surveys and literature reviews, whose uncertainty and concerns over securing measurement data make active investment in North Korean REDD difficult and make investors reluctant to deal with the governments, companies, and individuals involved in North Korean forest conservation. The results of this study are expected to serve as concrete reference material for South Korean companies seeking to carry out REDD projects in North Korea and for practitioners in carbon trading, including the Green Climate Fund (GCF).

스마트 퍽 시스템 : 디지털 정보의 물리적인 조작을 제공하는 실감 인터페이스 기술 (SmartPuck System : Tangible Interface for Physical Manipulation of Digital Information)

  • 김래현;조현철;박세형
    • Journal of KIISE: Computing Practices and Letters / Vol. 13, No. 4 / pp.226-230 / 2007
  • In today's most common desktop PC environment, a keyboard and mouse are used for information input and a monitor that visually presents information serves as the output device. In this environment, handling digital information requires moving a virtual cursor with the mouse to select the desired graphical icon on the monitor, where the virtual cursor represents the relative movement of the physical mouse on the table. This desktop metaphor does not give users an intuitive interface based on physical sensation. This paper introduces a tangible interface that lets users interact with a computer through a physical device called the SmartPuck. The SmartPuck system reduces the gap between human analog perception and action and the digital information inside the computer. The system consists of a PDP-based tabletop display, the SmartPuck itself, a physical device that enables direct and intuitive interaction with the information shown on the display, and a device for tracking the SmartPuck's position. Finally, examples of applications built on this system are presented.

A study on aerial triangulation from multi-sensor imagery

  • Lee, Young-ran; Habib, Ayman; Kim, Kyung-Ok
    • Korean Society of Remote Sensing: Conference Proceedings / Proceedings of the 2002 International Symposium on Remote Sensing / pp.400-406 / 2002
  • Recently, an enormous volume of remotely sensed data has been acquired by an ever-growing number of Earth observation satellites. Combining imagery from diverse sources is an important requirement in many applications, such as data fusion, city modeling, and object recognition. Aerial triangulation is a procedure for reconstructing object space from imagery. However, since different kinds of imagery have their own sensor models, characteristics, and resolutions, previous approaches to aerial triangulation (or georeferencing) treat each sensor model separately. This study evaluates the advantages of triangulating a large number of images from multiple sensors simultaneously; the incorporated sensors are frame, push-broom, and whisk-broom cameras. The limitations of push-broom or whisk-broom sensor models can be compensated by combined triangulation with frame imagery, and vice versa. The experiments conducted in this study show that the object space reconstructed from multi-sensor triangulation is more accurate than that reconstructed from a single sensor model.
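
Unlike a frame camera, a push-broom (or whisk-broom) sensor exposes one image line at a time, so each line has its own exterior orientation. A common way to make combined triangulation tractable, and presumably the kind of model being compensated here, is to write the exterior orientation parameters as low-order polynomials in the line (time) coordinate and apply the collinearity condition per scan line with the along-track image coordinate fixed at zero. A typical formulation (illustrative, not necessarily the authors' exact model):

    \[
    X_L(t) = X_0 + X_1 t + X_2 t^2, \qquad
    Y_L(t) = Y_0 + Y_1 t + Y_2 t^2, \qquad
    Z_L(t) = Z_0 + Z_1 t + Z_2 t^2,
    \]

with analogous polynomials for the rotation angles \(\omega(t)\), \(\varphi(t)\), \(\kappa(t)\), and the collinearity equations of each scan line evaluated with \(x = 0\).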
