• Title/Summary/Keyword: Computer model

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science / v.15 / pp.225-266 / 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem appropriate to Korean, because the syntax and semantics of the Korean language differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, the typical subject catalogs in libraries, are investigated, and the most notable subject indexing systems, in particular PRECIS, developed by the British National Bibliography, are reviewed and analyzed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, although a considerable number of PRECIS principles are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse; from that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationships among index terms. 4) Traditionally, a single-line entry format is used, in which a subject heading or index entry is presented as a single sequence of words consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS instead employs a two-line entry format with three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural-language order, and inverted headings are not produced; consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a separate set of routines, leading to the production of 'see' and 'see also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system.
8) KOSIS is able to maintain consistency of index entries and cross references by means of routine identification of the established index strings and the reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number is allocated to each new index string and each new index term, respectively. KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods; thus, detailed algorithms for the machine production of the various outputs are provided for institutions with access to computer facilities.
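Point 4) above describes a mechanical rotation of a context-dependent term string into two-line entries. The following is a minimal sketch of such a rotation, assuming a simplified PRECIS-like "shunting"; the function name and the sample terms are illustrative, not taken from the paper.

```python
def kosis_entries(string_terms):
    """Rotate a context-dependent index string into two-line entries.

    Each entry fills three positions: 'lead' (the user's access point),
    'qualifier' (terms setting the lead into its wider context), and
    'display' (terms that are context-dependent on the lead).
    """
    entries = []
    for i, lead in enumerate(string_terms):
        qualifier = list(reversed(string_terms[:i]))  # wider context first
        display = string_terms[i + 1:]                # narrower context
        entries.append({"lead": lead, "qualifier": qualifier, "display": display})
    return entries

# Hypothetical index string, ordered from widest to narrowest context.
for e in kosis_entries(["Korea", "Libraries", "Catalogs", "Subject indexing"]):
    print(e["lead"], "-", ", ".join(e["qualifier"]) or "(none)")
    print("   ", ", ".join(e["display"]) or "(none)")
```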


A Graph Layout Algorithm for Scale-free Network (척도 없는 네트워크를 위한 그래프 레이아웃 알고리즘)

  • Cho, Yong-Man;Kang, Tae-Won
    • Journal of KIISE: Computer Systems and Theory / v.34 no.5_6 / pp.202-213 / 2007
  • A network is an important model widely used in natural and social science as well as in engineering. To analyze such networks easily, their features must be laid out visually, and research on graph layout has accordingly advanced with the development of computer technology. Among network models, the scale-free network, prominent in recent years, is widely used for analyzing and understanding complicated situations in various fields. The scale-free network has two characteristic features: first, the node degrees (numbers of links) follow a power-law distribution; second, the network contains hubs with many links. It is therefore important to represent the hubs visually in a scale-free network, but existing graph layout algorithms so far represent only clusters. In this paper we therefore suggest a graph layout algorithm that effectively presents the scale-free network. The 'Hubity' (hub + -ity) repulsive force between hubs in the suggested algorithm is in inverse proportion to their distance, and if the degree of the hubs increases ${\alpha}$ times, the Hubity repulsive force between them becomes ${\alpha}^{\gamma}$ times larger (${\gamma}$ is the connection-line exponent). Also, the algorithm has a factor that controls the force in proportion to the total node number and the total link number, so the Hubity repulsive force is independent of the scale of the network. The proposed algorithm is compared with existing graph layout algorithms through an experiment. The experimental process is as follows: first, determine whether a hub exists in the network by checking the connection-line exponent; if its value is between 2 and 3, conclude that the network is scale-free and has a hub; then apply the suggested algorithm. As a result, we validated that the proposed graph layout algorithm shows the scale-free network more effectively than the existing cluster-centered algorithms [e.g., Noack].
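The abstract pins down the Hubity force up to a constant: inverse in distance, scaling as ${\alpha}^{\gamma}$ when hub degrees scale by ${\alpha}$, and normalized by network size. Below is a minimal sketch under those constraints; the geometric-mean pairing of the two degrees and the normalization factor are assumptions, not the authors' published formula.

```python
import math

def hubity_repulsion(deg_u, deg_v, dist, gamma, n_nodes, n_links):
    """Repulsive force between two hubs, inversely proportional to distance."""
    # Geometric mean of the degrees: scaling both degrees by alpha
    # scales the force by alpha**gamma, as the abstract states.
    deg = math.sqrt(deg_u * deg_v)
    # Assumed scale-normalizing factor from total node and link counts.
    scale = n_nodes / n_links
    return scale * deg ** gamma / dist
```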

A study on the implementation of Medical Telemetry systems using wireless public data network (무선공중망을 이용한 의료 정보 데이터 원격 모니터링 시스템에 관한 연구)

  • 이택규;김영길
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2000.10a / pp.278-283 / 2000
  • As information and communication technology has developed, blood pressure, pulse, electrocardiogram (ECG), SpO2, and blood tests can be checked easily at home. Routine health checks become possible by interfacing home medical instruments with a wireless public data network. This service relieves the inconvenience of visiting the hospital every time and saves the individual's time and cost. In each house, biosignal data detected from the human body are transmitted to a distant hospital through the wireless public data network. The medical information transmission system uses a wireless short-range network: it transmits the acquired biosignals wirelessly from the personal device to the main center system in the hospital. The remote telemetry system is implemented using a wireless medium-access protocol built by adapting the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol standardized in IEEE 802.11. Among the home-care telemetry functions that can measure blood pressure, pulse, ECG, and SpO2, this study implements the ECG measurement part. It embeds the ECG function into a mobile device and adds a 900 MHz band wireless public data interface, so that the aged, patients, or anyone else at home can acquire an ECG and keep a record of the data. This would be essential for managing those found in a health examination to have heart disease and for continuously observing latent heart disease patients. In the implemented system, the ECG data among the biosignals are transmitted using a wireless network modem and the NCL (Native Control Language) protocol, and the network connects to the wired host computer through the SCR (Standard Context Routing) protocol. The host computer checks the recorded individual information and the acquired ECG data, then sends the corresponding examination results back to the mobile device. The study suggests a medical transmission system model utilizing the wireless public data network.
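A minimal, simulation-style sketch of the CSMA/CA discipline the system adapts from IEEE 802.11: sense the carrier, back off for a random number of slots, and retry with a doubled contention window after a collision. `channel_busy` and `send_frame` are hypothetical stand-ins for the radio driver, not APIs from the paper.

```python
import random
import time

def csma_ca_send(frame, channel_busy, send_frame,
                 slot_s=0.001, cw_min=16, cw_max=1024, max_retries=7):
    cw = cw_min
    for _ in range(max_retries):
        while channel_busy():            # carrier sense: wait for an idle medium
            time.sleep(slot_s)
        time.sleep(random.randrange(cw) * slot_s)  # random backoff
        if not channel_busy():           # medium still idle after backoff
            send_frame(frame)
            return True
        cw = min(cw * 2, cw_max)         # binary exponential backoff
    return False
```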


A Study on the Precise Lineament Recovery of Alluvial Deposits Using Satellite Imagery and GIS (충적층의 정밀 선구조 추출을 위한 위성영상과 GIS 기법의 활용에 관한 연구)

  • 이수진;석동우;황종선;이동천;김정우
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.363-368 / 2003
  • We have developed a more effective algorithm to extract lineaments in areas covered by wide alluvial deposits, which show a relatively narrow range of brightness in Landsat TM images, whereas the currently used algorithm is limited to mountainous areas. In the new algorithm, flat areas consisting mainly of alluvial deposits are selected using local enhancement from the digital elevation model (DEM). Aspect values are obtained with a 3${\times}$3 moving window by Zevenbergen & Thorne's method, and the slopes of the study area are then determined from the aspect values. After the lineament factors in the alluvial deposits are revealed by comparison against threshold values, the first-rank lineament under the alluvial deposits is extracted using the Hough transform. To extract the final lineament, the lowest points under the alluvial deposits in a topographic section perpendicular to the first-rank lineament are determined through spline interpolation, and the final lineament is then chosen by a Hough transform over these lowest points. The algorithm developed in this study reveals a clearer lineament in areas covered by much larger alluvial deposits than the conventional algorithm does. There are, however, some differences between the first-rank lineament, obtained using aspect and slope, and the final lineament. This study shows that the new algorithm extracts lineaments more effectively in areas covered with wide alluvial deposits than in areas of converging slope, areas with narrow alluvial deposits, or valleys.
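A minimal sketch of the 3${\times}$3 aspect/slope step using Zevenbergen & Thorne's finite-difference gradients; the window orientation (rows running north to south) and the aspect convention are assumptions about the paper's setup.

```python
import numpy as np

def slope_aspect(win, cell):
    """win: 3x3 elevation window (rows north to south); cell: grid spacing."""
    g = (win[1, 2] - win[1, 0]) / (2 * cell)   # dz/dx, west-to-east gradient
    h = (win[0, 1] - win[2, 1]) / (2 * cell)   # dz/dy, south-to-north gradient
    slope = np.degrees(np.arctan(np.hypot(g, h)))
    aspect = (np.degrees(np.arctan2(g, h)) + 360) % 360  # clockwise from north
    return slope, aspect
```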


Semantic Access Path Generation in Web Information Management (웹 정보의 관리에 있어서 의미적 접근경로의 형성에 관한 연구)

  • Lee, Wookey
    • Journal of the Korea Society of Computer and Information / v.8 no.2 / pp.51-56 / 2003
  • The structuring of Web information supports a strong user-side viewpoint: a user wants to satisfy his or her own needs when exploring a specific Web site. Beyond plain depth-first or breadth-first traversal, the Web information is abstracted into a hierarchical structure. A prototype system is suggested in order to visualize this structure and represent its semantic significance. As a motivating example, a Web test site is presented and analyzed with respect to several keywords. As future research, the Web site model should be extended to the whole WWW, and an accurate assessment function needs to be devised by which the suggested models can be evaluated.
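A minimal sketch of abstracting a site's link graph into a hierarchy by breadth-first traversal from the home page; the dictionary representation of the link graph is an illustrative assumption, not the paper's model.

```python
from collections import deque

def bfs_hierarchy(links, root):
    """links: dict mapping a page URL to the URLs it links to."""
    parent, level = {root: None}, {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target in links.get(page, ()):
            if target not in parent:        # keep the first (shortest) access path
                parent[target] = page
                level[target] = level[page] + 1
                queue.append(target)
    return parent, level  # tree edges and depths form the hierarchical abstraction
```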


A Study on Evaluation of Visual Factor for Measuring Subjective Virtual Realization (주관적인 가상 실감화 측정 방법에 대한 시각적 요소 평가 연구)

  • Won, Myeung-Ju;Park, Sang-In;Kim, Chi-Jung;Lee, Eui-Chul;Whang, Min-Cheol
    • Science of Emotion and Sensibility / v.15 no.3 / pp.389-398 / 2012
  • Virtual worlds have pursued reality as if they actually existed. In order to evaluate the sense of reality in computer-simulated worlds, several subjective questionnaires, each including specific independent variables, have been proposed in the literature. However, these questionnaires lack the reliability and validity necessary for defining and measuring virtual realization, and few studies have investigated the effect of visual factors on the sense of reality experienced in a virtual environment. Therefore, this study aimed to reinvestigate the variables and to propose a more reliable and suitable questionnaire for evaluating virtual realization, focusing on visual factors. Twenty-one questions were gleaned from the literature and from interviews with focus groups. Exploratory factor analysis with oblique rotation was performed on data obtained from 200 participants (100 female) after exposure to a virtual character image depicted in an extreme way. After poorly loading items were removed, the remaining subsets were subjected to confirmatory factor analysis on the data from the same participants. As a result, 3 significant factors were determined to efficiently measure virtual realization: visual presence (3 items), visual immersion (7 items), and visual interactivity (4 items). The proposed factors were verified by a subjective evaluation in which participants assessed a 3D virtual eyeball model with respect to visual presence. The results indicated that the measurement method is suitable for evaluating the degree of virtual realization, and the proposed method is expected to measure that degree reasonably.
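A minimal sketch of the exploratory-factor-analysis step over 21 items and 200 respondents. scikit-learn offers only orthogonal rotations, so varimax stands in here for the paper's oblique rotation, and the response matrix below is placeholder data, not the study's.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(200, 21)).astype(float)  # placeholder Likert data

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(responses)
loadings = fa.components_.T                  # 21 items x 3 factors
weak = np.abs(loadings).max(axis=1) < 0.4    # poorly loading items to drop
print("candidate items to remove:", np.where(weak)[0])
```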


Polarization-sensitive Optical Coherence Tomography Imaging of Pleural Reaction Caused by Talc in an ex vivo Rabbit Model (생체 외 토끼 모델에서의 탈크에 의해 유발되는 흉막 반응의 편광 민감 광 결맞음 단층촬영 이미징)

  • Park, Jung-Eun;Xin, Zhou;Oak, Chulho;Kim, Sungwon;Lee, Haeyoung;Park, Eun-Kee;Jung, Minjung;Kwon, Daa Young;Tang, Shuo;Ahn, Yeh-Chan
    • Korean Journal of Optics and Photonics / v.31 no.1 / pp.1-6 / 2020
  • The chest wall, an organ directly affected by environmental particles through respiration, consists of ribs, a pleural layer, and intercostal muscles. To diagnose early and treat disease in this body part, it is important to visualize the details of the chest wall, but the structure of the pleural layer cannot be seen by chest computed tomography or ultrasound. On the other hand, optical coherence tomography (OCT), with its high spatial resolution, is suited to observing the pleural-layer response to talc, a fine particulate material. However, intensity-based OCT provides too little information to distinguish the detailed structure of the chest wall, and cannot separate the reaction of the pleural layer from talc-induced changes in the muscle. Polarization-sensitive OCT (PS-OCT) takes advantage of the fact that specific tissues such as muscle, which are optically birefringent, change the polarization state of the backscattered light. Moreover, the birefringence of muscle, associated with the arrangement of myofilaments, indicates the muscle's condition, which can be assessed by measuring the change in retardation. The PS-OCT image is interpreted from three major perspectives for talc-exposed chest-wall imaging: a thickened pleural layer, a separation between the pleural layer and the muscle, and a phase-retardation measurement around lesions. In this paper, a rabbit chest wall after talc pleurodesis is investigated by PS-OCT. The PS-OCT images visualize the pleural layer and muscle, and the system shows different birefringence for normal and damaged lesions. An analysis based on the phase-retardation slope supports the results from the PS-OCT images and histology.
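A minimal sketch of the amplitude-based phase-retardation estimate commonly used in PS-OCT; the paper's exact processing chain may differ, and the channel names are illustrative.

```python
import numpy as np

def phase_retardation(a_h, a_v):
    """a_h, a_v: detected amplitudes of the two orthogonal polarization
    channels at each depth. Returns retardation in degrees (0-90); its
    slope over depth reflects tissue birefringence, e.g. in muscle."""
    return np.degrees(np.arctan2(np.abs(a_v), np.abs(a_h)))
```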

A Study on Intelligent Skin Image Identification from Social Media Big Data

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.27 no.9 / pp.191-203 / 2022
  • In this paper, we developed a system that intelligently identifies skin image data from big data collected from the social media platform Instagram and extracts standardized skin sample data for skin condition diagnosis and management. The proposed system consists of a big data collection and analysis stage, a skin image analysis stage, a training data preparation stage, an artificial neural network training stage, and a skin image identification stage. In the big data collection and analysis stage, big data are collected from Instagram, and image information for skin condition diagnosis and management is stored as the analysis result. In the skin image analysis stage, the evaluation and analysis results of the skin image are obtained using traditional image processing techniques. In the training data preparation stage, training data are prepared by extracting skin sample data from the skin image analysis results. In the artificial neural network training stage, an artificial neural network, AnnSampleSkin, that intelligently predicts the skin image type is built using these training data, and the model is completed through training. In the skin image identification stage, skin samples are extracted from images collected from social media, and the image-type predictions of the trained network are integrated to intelligently identify the final skin image type. The proposed identification method achieves high accuracy of about 92% or more and can provide standardized skin sample image big data. The extracted skin sample set is expected to serve as standardized skin image data that is very efficient and useful for diagnosing and managing skin conditions.
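A minimal sketch of the training and identification stages, with a generic multilayer perceptron standing in for the paper's AnnSampleSkin network; the feature vectors, labels, and layer sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = rng.random((500, 64))        # stand-in skin-sample feature vectors
labels = rng.integers(0, 4, size=500)   # stand-in skin image types

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
model.fit(features, labels)             # artificial neural network training stage

new_samples = rng.random((5, 64))       # samples extracted from newly collected images
print(model.predict(new_samples))       # skin image identification stage
```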

Adaptive Search Range Decision for Accelerating GPU-based Integer-pel Motion Estimation in HEVC Encoders (HEVC 부호화기에서 GPU 기반 정수화소 움직임 추정을 고속화하기 위한 적응적인 탐색영역 결정 방법)

  • Kim, Sangmin;Lee, Dongkyu;Sim, Dong-Gyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.19 no.5 / pp.699-712 / 2014
  • In this paper, we propose a new Adaptive Search Range (ASR) decision algorithm for accelerating GPU-based Integer-pel Motion Estimation (IME) in High Efficiency Video Coding (HEVC). To decide the ASR, we classify a frame into two models using Motion Vector Differences (MVDs) and then adaptively decide the search range of each model. In order to apply the proposed algorithm to the GPU-based ME process, the starting points of the ME are decided using only temporal Motion Vectors (MVs). The CPU decides the ASR as well as the starting points and transfers them to the GPU, which then performs the integer-pel ME. The proposed algorithm reduces the total encoding time by 37.9% with a BD-rate increase of 1.1% and performs the ME 951.2 times faster than the CPU-based anchor. In addition, it reduces the ME running time by 57.5% with a negligible coding loss of 0.6%, compared with a simple GPU-based ME without the ASR decision.
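A minimal sketch of the CPU-side decision: classify blocks into two models by MVD magnitude and assign each model a search range before shipping both to the GPU. The threshold and range values are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def decide_search_ranges(mvds, thresh=4.0, sr_small=8, sr_large=32):
    """mvds: (N, 2) array of per-block motion vector differences."""
    mags = np.linalg.norm(mvds, axis=-1)
    calm = mags < thresh                        # low-motion model
    return np.where(calm, sr_small, sr_large)   # per-block search range for the GPU
```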

Analysis of Georeferencing Accuracy in 3D Building Modeling Using CAD Plans (CAD 도면을 활용한 3차원 건축물 모델링의 Georeferencing 정확도 분석)

  • Kim, Ji-Seon;Yom, Jae-Hong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.2 / pp.117-131 / 2007
  • Representation of building interior space is an active research area as the need for geometrically accurate and visually realistic models increases. Three-dimensional representation is common ground for disciplines such as computer graphics, architectural design and engineering, and Geographic Information Systems (GIS). In many cases CAD plans are the starting point for the reconstruction of 3D building models. The main objectives of building reconstruction in GIS applications are visualization and spatial analysis. Hence, CAD plans need to be preprocessed and edited to fit the data models of GIS software, and then georeferenced to enable spatial analysis. This study automated the preprocessing of CAD data using AutoCAD VBA (Visual Basic for Applications), and the processed data were topologically restructured for further analysis in a GIS environment. The accuracy of georeferencing the CAD data was also examined by comparing the results of coordinate transformations that used digital maps and GPS measurements as sources of ground control points. The reconstructed buildings were then applied to visualization and network modeling.
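A minimal sketch of the georeferencing step, fitting a least-squares affine transformation from CAD-plan coordinates to ground control points; an affine model is a common choice here, but the paper's transformation model may differ.

```python
import numpy as np

def fit_affine(src, dst):
    """src: Nx2 CAD-plan points; dst: Nx2 ground coordinates (N >= 3)."""
    a = np.hstack([src, np.ones((len(src), 1))])      # augment with [x, y, 1]
    params, *_ = np.linalg.lstsq(a, dst, rcond=None)  # 3x2 affine parameters
    return params

def apply_affine(params, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params
```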