• Title/Summary/Keyword: Digital Vector


Clinical Evaluation of Microreplantation in the Digital Amputation (수지절단손상에 대한 재접합술의 평가와 분석)

  • Lee, Tae-Hoon;Woo, Sang-Hyeon;Choi, See-Ho;Seul, Jung-Hyun
    • Journal of Yeungnam Medical Science / v.5 no.1 / pp.23-32 / 1988
  • Finger injuries are becoming more common with the increasing use of mechanical industrial and household appliances. Among hand injuries, amputation is a serious disaster for the patient. Recently, the application of microsurgical techniques to the reattachment of amputated digits has become a common clinical procedure. We performed microsurgical replantation on 75 patients with 102 amputated digits from March 1986 to February 1988 and obtained the following results. 1. The most common age group was the third decade, and the male-to-female ratio was about 5:1. 2. The ratio of right to left hand was about 1:1, but that of dominant to non-dominant hand was about 2:1. 3. The index finger was most commonly injured, followed by the middle finger. 4. The most common type of injury was the crushing injury, and the most common injury vector was a press. 5. General and regional anesthesia were used in equal proportions. 6. The survival rate of microreplantation was 77.8% for zone II injuries and 80% for zone III injuries. 7. The functional result after replantation was better at zone II than at zone III. 8. Microreplantation was performed regardless of the type of injury, the severity of crushing, and the ischemic time; the patient's wishes were an important factor.


Generalization by LoD and Coordinate Transformation in On-the-demand Web Mapping (웹환경에서 LoD와 좌표변형에 의한 지도일반화)

  • Kim, Nam-Shin
    • Journal of the Korean Association of Regional Geographers / v.15 no.2 / pp.307-315 / 2009
  • The purpose of map generalization is to produce a concise cartographic representation that conveys geographic meaning. With the development of computer cartography, new generalization algorithms have been developed for application in digital environments. This study examines the possibilities of multiscale mapping by generalization, applying coordinate transformation and LoD (level of detail) in web cartography. Coordinate transformation is used to improve the transmission of spatial data, while LoD builds the web map by selecting spatial data according to the user's zoom level. The test layers consisted of contour lines, the stream network, place names, mountain summits, and administrative offices. Generalization was applied at each zoom level for linear and polygonal features using XML-based Scalable Vector Graphics (SVG). As a result, the SVG data was reduced from 9.76 MB to 4.08 MB, about 42% of its original size. LoD generalization was applied to map elements in stages by zoom level: at the first zoom level, the main place names, administrative offices, higher-order stream channels, and main mountain summits were represented, and the number of map elements increases at higher levels. The results of this study can help improve map aesthetics and minimize data in web cartography; further research on map generalization algorithms for the web is also needed.

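The LoD mechanism described above — showing only selected map elements at each zoom level — can be sketched as a simple visibility table. The layer names and level thresholds below are illustrative assumptions, not the paper's actual configuration:

```python
# Minimal LoD selection: each layer carries the first zoom level at
# which it becomes visible; a zoom request returns only those layers.
LOD_TABLE = {
    "administrative_office": 1,   # shown from the first zoom level
    "main_place_names":      1,
    "stream_order_3_plus":   1,   # higher-order channels appear first
    "main_summits":          1,
    "minor_place_names":     2,
    "stream_order_2":        2,
    "contour_lines":         3,   # most detailed layers appear last
    "minor_summits":         3,
}

def layers_for_zoom(level):
    """Return the layers to render at a given zoom level."""
    return sorted(name for name, lod in LOD_TABLE.items() if lod <= level)
```

A web map client would call `layers_for_zoom` on each zoom change and request only the returned layers, which is where the transmission savings come from.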

Estimation of sea surface wind using Radarsat-1 SAR (RADARSAT-1 SAR자료를 이용한 해상풍 추정)

  • Yoon, Hong-Joo;Cho, Han-Keun;Kang, Heung-Soon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.227-230 / 2007
  • Using SAR's microwave sensing, the ocean can be observed in spite of bad weather, day and night. SAR images of the sea surface contain much information on atmospheric phenomena related to the surface wind vector, and the wind speed extracted from SAR imagery is used in various applications. To estimate the sea surface wind from SAR images, wind direction data and the sigma nought value are put into the CMOD model, which retrieves the wind information. Because a 2D FFT is applied, the wind spectrum extracted from SAR always presents two opposed solutions 180° apart; this ambiguity must be resolved using the position of land, observed wind direction, or a numerical model. Previously, we extracted the digital number (DN) from RADARSAT-1 SAR and converted it into sigma nought using ENVI 4.0, which took a long time because every process was manual. We therefore wrote MATLAB code to perform the sigma nought conversion, and we are now extracting the wind direction from sigma nought. Deciding the wind direction still needs further study because of the 180° ambiguity.

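The 180° ambiguity mentioned above follows from the conjugate symmetry of the 2D FFT of a real-valued image: |F(k)| = |F(−k)|, so every spectral peak has a mirror partner and the streak orientation is recoverable only up to 180°. A minimal NumPy sketch with a synthetic striped image (not the authors' MATLAB code):

```python
import numpy as np

def streak_direction_candidates(img):
    """Estimate wind-streak orientation from the 2D power spectrum.

    Because img is real-valued, its spectrum satisfies |F(k)| = |F(-k)|,
    so the spectral peak always comes in a symmetric pair: the recovered
    direction is ambiguous by 180 degrees.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    P = np.abs(F) ** 2
    cy, cx = np.array(P.shape) // 2
    P[cy, cx] = 0.0                       # suppress the DC component
    ky, kx = np.unravel_index(np.argmax(P), P.shape)
    ang = np.degrees(np.arctan2(ky - cy, kx - cx)) % 360.0
    return ang, (ang + 180.0) % 360.0     # the two ambiguous solutions

# synthetic "wind streaks": a sinusoid oriented 30 deg from the x-axis
y, x = np.mgrid[0:128, 0:128]
theta = np.radians(30.0)
img = np.sin(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / 16.0)
a, b = streak_direction_candidates(img)
```

Both returned candidates fit the spectrum equally well; as the abstract notes, choosing between them requires external information such as land position or a numerical model.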

Convergence Implementing Emotion Prediction Neural Network Based on Heart Rate Variability (HRV) (심박변이도를 이용한 인공신경망 기반 감정예측 모형에 관한 융복합 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of the Korea Convergence Society / v.9 no.5 / pp.33-41 / 2018
  • The purpose of this study is to develop a more accurate and robust emotion prediction neural network (EPNN) model by combining heart rate variability (HRV) and a neural network. To improve the prediction performance more reliably, the proposed EPNN model employs various types of activation functions, such as hyperbolic tangent, linear, and Gaussian functions, all of which are embedded in the hidden nodes. To verify the validity of the proposed EPNN model, a number of HRV metrics were calculated from 20 valid and qualified participants whose emotions were induced using a money game. To add rigor to the experiment, the participants' valence and arousal were checked and used as the output nodes of the EPNN. The experimental results reveal that the F-measure for valence and arousal is 80% and 95%, respectively, showing that the EPNN yields robust and well-balanced performance. The EPNN was compared with competing models such as a conventional neural network, logistic regression, support vector machine, and random forest, and was more accurate and reliable than all of them. The results of this study can be effectively applied to many types of wearable computing devices as the ubiquitous digital health environment becomes feasible and permeates our everyday lives.
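The mixed-activation hidden layer described above can be sketched as a single forward pass. The layer sizes, the even split of hidden nodes among the three activation types, and the sigmoid output node are illustrative assumptions; training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian(z):
    return np.exp(-z ** 2)

def epnn_forward(x, W_h, b_h, W_o, b_o):
    """One forward pass with a mixed-activation hidden layer: the
    hidden nodes are split evenly among tanh, linear, and Gaussian
    activations, then combined by a sigmoid output node."""
    z = x @ W_h + b_h
    n = z.shape[-1] // 3
    h = np.concatenate(
        [np.tanh(z[..., :n]), z[..., n:2 * n], gaussian(z[..., 2 * n:])],
        axis=-1,
    )
    o = h @ W_o + b_o
    return 1.0 / (1.0 + np.exp(-o))       # sigmoid output (e.g. valence)

# hypothetical shapes: 4 HRV features in, 6 hidden nodes, 1 output
W_h = rng.normal(size=(4, 6)); b_h = np.zeros(6)
W_o = rng.normal(size=(6, 1)); b_o = np.zeros(1)
y = epnn_forward(rng.normal(size=(5, 4)), W_h, b_h, W_o, b_o)
```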

A Study on the Construction of Indoor Spatial Information using a Terrestrial LiDAR (지상라이다를 이용한 지하철 역사의 3D 실내공간정보 구축방안 연구)

  • Go, Jong Sik;Jeong, In Hun;Shin, Han Sup;Choi, Yun Soo;Cho, Seong Kil
    • Spatial Information Research / v.21 no.3 / pp.89-101 / 2013
  • Recently, the importance of indoor space has been rising as buildings become larger and more complex with the development of construction technology. Accordingly, the target area of spatial information services is rapidly expanding from outdoor to indoor space, and various demands for indoor spatial information are expected to arise through the development of high technologies such as mobile IT and convergence with other fields. This research therefore reviews the available methods for building indoor spatial information and then constructs high-accuracy three-dimensional indoor spatial information using high-accuracy indoor laser surveying and 3D vector processing techniques. The accuracy of the constructed 3D indoor model was evaluated by overlap analysis against a digital map; the results showed positional accuracy within 0.04 m on the x-axis and 0.06 m on the y-axis. These results can serve as fundamental data for building indoor spatial data and for the integrated use of indoor and outdoor spatial information.
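The overlap analysis above reports per-axis positional accuracy (0.04 m in x, 0.06 m in y). Assuming corresponding point pairs have already been matched between the built model and the reference digital map, the per-axis RMSE can be computed as:

```python
import numpy as np

def axis_rmse(model_xy, map_xy):
    """Per-axis RMSE between corresponding 2-D points of the built
    indoor model and the reference digital map (overlap analysis)."""
    d = np.asarray(model_xy) - np.asarray(map_xy)
    return np.sqrt(np.mean(d ** 2, axis=0))   # (rmse_x, rmse_y)
```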

Low-Complexity H.264/AVC Deblocking Filter based on Variable Block Sizes (가변블록 기반 저복잡도 H.264/AVC 디블록킹 필터)

  • Shin, Seung-Ho;Doh, Nam-Keum;Kim, Tae-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.41-49 / 2008
  • Compared with existing compression technologies, H.264/AVC supports variable-block motion compensation, multiple reference images, 1/4-pixel motion vector accuracy, and an in-loop deblocking filter. While these coding tools account for its major improvement in compression rate, they also lead to high complexity. For H.264 video coding to be applied more widely on low-end, low-bit-rate terminals, improving the coding speed is essential. Currently, the deblocking filter, which can improve the subjective image quality of the video to a certain degree, is used on low-end terminals only to a limited extent because of its computational complexity. In this paper, a performance improvement method for the deblocking filter is proposed that efficiently reduces the blocking artifacts occurring during compression of low-bit-rate digital video. In the proposed method, the spatial-correlation characteristics of the image are extracted using the variable-block information of motion compensation; filtering is divided into four modes according to these characteristics, and adaptive filtering is executed in the divided regions. The proposed deblocking method reduces blocking artifacts, prevents excessive blurring, and improves performance by about 30~40% compared with the existing method.
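The idea of steering the deblocking filter with variable-block information can be illustrated as follows; the four-mode rule and its thresholds are hypothetical, not the paper's actual classification:

```python
def filter_mode(block_a, block_b):
    """Illustrative mode selection for one block boundary (not the
    paper's exact rule): larger motion-compensation partitions imply
    smoother content, so they receive stronger deblocking.

    block_a, block_b are the (width, height) in pixels of the two
    partitions adjacent to the boundary, e.g. (16, 16) or (8, 4).
    """
    area = min(block_a[0] * block_a[1], block_b[0] * block_b[1])
    if area >= 16 * 16:
        return 3    # strong filtering: smooth region
    if area >= 8 * 16:
        return 2
    if area >= 8 * 8:
        return 1
    return 0        # skip: detailed region, preserve real edges
```

The intuition is that large motion-compensation partitions indicate smooth content where strong filtering is safe, while small partitions indicate detail whose edges should be preserved.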

Design of discriminant function for thick and thin coating from the white coating (백태 중 후태 및 박태 분류 판별함수 설계)

  • Choi, Eun-Ji;Kim, Keun-Ho;Ryu, Hyun-Hee;Lee, Hae-Jung;Kim, Jong-Yeol
    • Korean Journal of Oriental Medicine / v.13 no.3 / pp.119-124 / 2007
  • Introduction: In Oriental medicine, the status of the tongue is an important indicator for diagnosing one's health, because it represents physiological and clinicopathological changes of the inner parts of the body. Tongue diagnosis is not only convenient but also non-invasive, so it is widely used in Oriental medicine. However, since tongue diagnosis is strongly affected by examination circumstances, its performance depends on the light source, the viewing angle, the medical doctor's condition, and so on, and it is therefore not easy to make an objective and standardized tongue diagnosis. To address this problem, in this study we designed a discriminant function for thick and thin coating using color vectors of the preprocessed image. Method: 52 subjects who had been diagnosed as having a white-coated tongue were involved; 45 were diagnosed with thin coating and 7 with thick coating by Oriental medical doctors, and their tongue images were obtained from a digital tongue diagnosis system. Using the acquired tongue images, we implemented two steps: preprocessing and image analysis. The preprocessing step includes histogram equalization and histogram stretching on individual color components, especially intensity and saturation. This makes the difference between tongue substance and tongue coating more visible, so that the coating can be separated easily. In the analysis step, we examined the characteristics of the color values and found the threshold that divides the tongue area into coating and non-coating areas. From the tongue coating image, it was then possible to extract the variables that are important for classifying thick and thin coating. Result: By statistical analysis, two significant vectors associated with G were found, which describe the difference between thick and thin coating very well. Using these two variables, we designed the discriminant function for coating classification and examined its performance. As a result, the overall accuracy of thick and thin coating classification was 92.3%. Discussion: From this result, we expect that the discriminant function is applicable to other coatings in a similar way. It can also be used to make an objective and standardized diagnosis.

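The preprocessing described above — per-channel histogram stretching followed by thresholding to isolate the coating area — can be sketched like this; the percentile limits and the threshold are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def stretch(channel, lo=2, hi=98):
    """Percentile-based histogram stretching of one color channel
    (e.g. intensity or saturation), sharpening the contrast between
    tongue substance and tongue coating."""
    a, b = np.percentile(channel, [lo, hi])
    out = (channel.astype(float) - a) / max(b - a, 1e-9)
    return np.clip(out, 0.0, 1.0)

def coating_mask(channel, thresh):
    """Threshold the stretched channel to separate the coating area."""
    return stretch(channel) >= thresh
```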

Program Design and Implementation for Efficient Application of Heterogeneous Spatial Data Using GMLJP2 Image Compression Technique (GMLJP2 영상압축 기술을 이용한 다양한 공간자료의 효율적인 활용을 위한 프로그램 설계 및 구현)

  • Kim, Yoon-Hyung;Yom, Jae-Hong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.5 / pp.379-387 / 2006
  • The real world is modelled spatially either as discrete objects or as a continuous surface, and the resulting data models are usually represented as vector and raster data respectively. Although there are limited cases where a single data model is sufficient to solve the spatial problem at hand, it is now generally accepted that a GIS should be able to handle various types of data model. Recent advances in spatial technology have introduced an even greater variety of heterogeneous data models, and the need to handle and manage these efficiently is ever growing. The OGC (Open GIS Consortium), an international organization pursuing standardization in the geospatial industry, recently introduced the GMLJP2 (Geography Markup Language JP2) format, which enables heterogeneous spatial data to be stored and handled together. GMLJP2 is based on JP2, the JPEG 2000 wavelet image compression format, and takes advantage of the versatility of GML to add extra data on top of the compressed image. This study examines the GMLJP2 format closely to analyse and exploit its potential for handling and managing heterogeneous spatial data. An aerial image, a digital map, and LiDAR data were successfully transformed and archived into a single GMLJP2 file, and a simple viewing program was written to view the heterogeneous spatial data from this single file.

Automation of Building Extraction and Modeling Using Airborne LiDAR Data (항공 라이다 데이터를 이용한 건물 모델링의 자동화)

  • Lim, Sae-Bom;Kim, Jung-Hyun;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.619-628 / 2009
  • LiDAR provides rapid data acquisition and useful information for reconstructing the surface of the Earth. However, extracting information from LiDAR data is not an easy task, because the data consist of irregularly distributed 3D point clouds and lack semantic and visual information. This study proposes methods for automatic building extraction and detailed 3D modeling using airborne LiDAR data. As preprocessing, noise and unnecessary data were removed by iterative surface fitting, and ground and non-ground data were then classified by histogram analysis. Building footprints were extracted by tracing points on the building boundaries, and refined footprints were obtained by regularization based on building hypotheses. The accuracy of the building footprints was evaluated by comparison with 1:1,000 digital vector maps; the horizontal RMSE was 0.56 m for the test areas. Finally, a method for 3D modeling of roof superstructures was developed. Statistical and geometric information of the LiDAR data on the building roofs was analyzed to segment the data and determine the roof shape, and the superstructures were modeled by 3D analytic functions derived by the least-squares method. The accuracy of the 3D modeling was estimated using simulated data; the RMSEs were 0.91 m, 1.43 m, 1.85 m and 1.97 m for flat, sloped, arch and dome shapes, respectively. The methods developed in this study show that the 3D building modeling process can be effectively automated.
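The roof-modeling step fits analytic surfaces to segmented roof points by least squares. The simplest instance, a plane z = ax + by + c for a flat or sloped roof, can be sketched as follows (the arch and dome shapes would use higher-order analytic functions):

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c to roof points, as a
    minimal analogue of the analytic roof-surface fitting."""
    pts = np.asarray(pts, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef   # (a, b, c)
```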

A Feature Re-weighting Approach for the Non-Metric Feature Space (가변적인 길이의 특성 정보를 지원하는 특성 가중치 조정 기법)

  • Lee Robert-Samuel;Kim Sang-Hee;Park Ho-Hyun;Lee Seok-Lyong;Chung Chin-Wan
    • Journal of KIISE:Databases / v.33 no.4 / pp.372-383 / 2006
  • Among the approaches to image database management, content-based image retrieval (CBIR) is viewed as having the best support for effective searching and browsing of large digital image libraries. Typical CBIR systems allow a user to provide a query image, from which low-level features are extracted and used to find 'similar' images in a database. However, there exists a semantic gap between human visual perception and low-level representations. An effective methodology for overcoming this semantic gap involves relevance feedback to perform feature re-weighting. Current approaches to feature re-weighting require the number of components in a feature representation to be the same for every image in consideration. Following this assumption, they map each component to an axis in the n-dimensional space, which we call the metric space; likewise, the feature representation is stored in a fixed-length vector. However, with the emergence of features that do not have a fixed number of components in their representation, existing feature re-weighting approaches are invalidated. In this paper we propose a feature re-weighting technique that supports features regardless of whether or not they can be mapped into a metric space. Our approach analyses the feature distances calculated between the query image and the images in the database, and two-sided confidence intervals are applied to the distances to obtain the information for feature re-weighting. There is no restriction on how the distances are calculated for each feature, which leaves freedom in how feature representations are structured: features need not be represented as fixed-length vectors or in a metric space. Our experimental results show the effectiveness of our approach, and a comparison with other work shows that it outperforms previous approaches.
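Because the proposed technique needs only the per-feature distances, never the feature components themselves, a confidence-interval-driven re-weighting step can be sketched as below. The exact weighting formula is an illustrative assumption, not the paper's:

```python
import math

def ci_halfwidth(dists, z=1.96):
    """Half-width of a two-sided (approximately 95%) confidence
    interval on the mean of the feature distances between the query
    and the relevant images."""
    n = len(dists)
    mean = sum(dists) / n
    var = sum((d - mean) ** 2 for d in dists) / (n - 1)
    return z * math.sqrt(var / n)

def reweight(dists_per_feature):
    """Illustrative re-weighting (not the paper's exact formula):
    features whose relevant-image distances cluster tightly get larger
    weights. Only distances are needed, so features need not live in
    fixed-length vectors or a metric space."""
    raw = [1.0 / (ci_halfwidth(d) + 1e-9) for d in dists_per_feature]
    total = sum(raw)
    return [w / total for w in raw]
```

A feature whose distances to the relevant images cluster tightly (a narrow interval) is treated as more discriminative and receives a larger share of the weight.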