• Title/Summary/Keyword: web 2.0 map


The Creation of Dental Radiology Multimedia Electronic Textbook (멀티미디어기술을 이용한 치과방사선학 전자 교과서 제작에 관한 연구)

  • Kim Eun-Kyung;Cha Sang-Yun;Han Won-Jeong;Hong Byeong-Hee
    • Imaging Science in Dentistry
    • /
    • v.30 no.1
    • /
    • pp.55-62
    • /
    • 2000
  • Purpose: This study was performed to develop an electronic textbook (CD-ROM title) on the preclinical practice of oral and maxillofacial radiology, using multimedia technology in an interactive environment. Materials and Methods: After comparing three multimedia authoring methods, i.e. a programming language, a multimedia authoring tool, and a web authoring tool, we chose the web authoring tool as the authoring method for our electronic textbook. An Intel Pentium II 350 MHz IBM-compatible personal computer with 128 megabytes of RAM, a Umax Powerlook flatbed scanner with a transparency unit, an Olympus Camedia 1400L digital camera, an ESS 1686 sound card, a Sony 8 mm Handycam, a PC Vision 97 Pro capture board, Namo Web Editor 3.0, Photoshop 3.0, ThumbNailer, RealPlayer 7 Basic, and RealProducer G2 were used to create the text documents, diagrams, figures, X-ray images, video, and sound files. We used JavaScript for the tree menu structure, moving text bar, link buttons, spreading list menus, image maps, etc. After creating all files and hyperlinking them, we burned a prototype CD-ROM title containing all of the above multimedia data, Netscape Communicator, and plug-in programs. Results and Conclusions: We developed a dental radiology electronic textbook which has 9 chapters and consists of 155 text documents, 26 figures, 150 X-ray image files, 20 video files, 20 sound files, and 50 questions with answers. We expect that this CD-ROM title can be used in intranet and internet environments and that continuous updates can be performed easily.


A Study on Designing of Metadata for Constructing the Library Map Information System (도서관지도정보시스템 구축을 위한 메타데이터 개발 연구)

  • Noh, Young-Hee
    • Journal of the Korean Society for Information Management
    • /
    • v.27 no.3
    • /
    • pp.241-264
    • /
    • 2010
  • This study aimed to construct a Library Map Information System (LMIS) based on the Wiki theory of Web 2.0. We built this system because there was no collective source of information about every library in the world. The system was also developed to provide a library location information service through a mash-up with Google Maps. Through this study, the metadata applied to the newly constructed system was developed using the Delphi method. A total of 13 experts, including librarians of school, public, academic, special, and national libraries as well as LIS faculty members and researchers, were commissioned as the Delphi panel. Through three rounds of Delphi survey analysis, the initial metadata elements were added to, modified, and deleted, and four sectors were proposed: library contact/location information, library information, collection information, and event information. The metadata for LMIS was thus organized into four sectors and 49 elements, each element assigned to a sector.
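A minimal sketch of how one metadata record could be organized into the four sectors named above (library contact/location, library, collection, and event information); the individual field names are hypothetical placeholders for illustration, not the paper's actual 49 elements.

```python
# Hypothetical LMIS-style metadata record grouped into the four proposed sectors.
library_record = {
    "contact_location": {          # library contact/location information
        "name": "Example City Library",
        "address": "123 Example St.",
        "latitude": 37.5665,       # coordinates used for the Google Maps mash-up
        "longitude": 126.9780,
        "phone": "+82-2-000-0000",
    },
    "library_information": {
        "type": "public",
        "founded": 1990,
        "homepage": "https://library.example.org",
    },
    "collection_information": {
        "volumes": 250000,
        "special_collections": ["local history"],
    },
    "event_information": [
        {"title": "Author talk", "date": "2010-09-01"},
    ],
}
```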

k-Interest Places Search Algorithm for Location Search Map Service (위치 검색 지도 서비스를 위한 k관심지역 검색 기법)

  • Cho, Sunghwan;Lee, Gyoungju;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.259-267
    • /
    • 2013
  • GIS-based web map services are increasingly accessible to the public. Among them, location query services are the most frequently used, but they are currently restricted to single-keyword searches. Although demand is increasing for a service that queries multiple keywords corresponding to sequential activities (banking, having lunch, watching a movie, and so on) at various POI locations, such a service has yet to be provided. The objective of this paper is to develop the k-IPS algorithm for quickly and accurately querying multiple POIs that internet users input and locating the search results on a web map. The algorithm utilizes the hierarchical tree structure of the R*-tree indexing technique to produce overlapping geometric regions. By using a recursive R*-tree index-based spatial join process, the performance of the existing spatial join operation was improved. The performance of the algorithm was tested with spatial queries of 2, 3, and 4 POIs selected from a set of 159 keywords. About 90% of the test results were produced within 0.1 second. The algorithm proposed in this paper is expected to be useful for providing a variety of location-based query services, for which demand is increasing, to conveniently support citizens' daily activities.
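As a rough illustration of querying several POI keywords at once over a spatial index, the sketch below builds one R-tree per category with the third-party Python `rtree` package and ranks keyword combinations by total pairwise distance. This is a brute-force stand-in under stated assumptions, not the authors' recursive R*-tree spatial join, and the POI data are made up.

```python
# Multi-keyword POI search sketch: per-category R-trees + combination ranking.
from rtree import index
from itertools import product, combinations
from math import dist

# Hypothetical POI data: (id, category, (x, y))
POIS = [
    (1, "bank",       (127.001, 37.501)),
    (2, "restaurant", (127.002, 37.502)),
    (3, "cinema",     (127.010, 37.510)),
    (4, "bank",       (127.011, 37.511)),
    (5, "restaurant", (127.012, 37.512)),
    (6, "cinema",     (127.003, 37.503)),
]

def build_index(pois):
    """One R-tree per category; points are stored as degenerate boxes."""
    idx_by_cat = {}
    for pid, cat, (x, y) in pois:
        if cat not in idx_by_cat:
            idx_by_cat[cat] = index.Index()
        idx_by_cat[cat].insert(pid, (x, y, x, y))
    return idx_by_cat

def k_interest_places(keywords, query_box, idx_by_cat, pois, k=3):
    """Return the k POI combinations (one POI per keyword) inside query_box
    with the smallest sum of pairwise distances."""
    by_id = {pid: xy for pid, _, xy in pois}
    candidates = []
    for kw in keywords:
        if kw not in idx_by_cat:
            return []                               # no POI for this keyword
        hits = list(idx_by_cat[kw].intersection(query_box))
        if not hits:
            return []                               # none inside the window
        candidates.append(hits)
    ranked = []
    for combo in product(*candidates):
        pts = [by_id[pid] for pid in combo]
        cost = sum(dist(a, b) for a, b in combinations(pts, 2))
        ranked.append((cost, combo))
    return [combo for _, combo in sorted(ranked)[:k]]

idx = build_index(POIS)
window = (126.99, 37.49, 127.02, 37.52)             # (minx, miny, maxx, maxy)
print(k_interest_places(["bank", "restaurant", "cinema"], window, idx, POIS, k=2))
```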

Water-well Management Data Modeling using UML 2.0 based in u-GIS Environment (u-GIS 환경에서 UML 2.0을 활용한 지하수 관리 데이터 모델링)

  • Jung, Se-Hoon;Kim, Kyung-Jong;Sim, Chun-Bo
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.6 no.4
    • /
    • pp.523-531
    • /
    • 2011
  • Many wells constructed to exploit groundwater resources are abandoned and not managed efficiently after use, and a variety of heavy metals and organic compounds released from these abandoned wells can cause groundwater pollution. To address these problems, this paper implements a system that monitors the locational information of drill holes and groundwater sensing information in real time in a u-GIS environment that combines ubiquitous sensor nodes with GIS technology. The system is designed using UML 2.0 by analyzing various user requirements and the interactions and data flows between internal system modules. It provides graphical user interfaces (GUIs) that allow system users to monitor water-well property and management information for each water-well at a remote site on a variety of platforms, including a GIS map in a web environment and smartphone-based mobile devices.
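A minimal sketch of the kind of water-well data model the abstract describes designing in UML 2.0, written here as Python dataclasses; the class and field names are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical water-well entity with per-well sensor readings.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class SensorReading:
    measured_at: datetime
    water_level_m: float         # groundwater level
    temperature_c: float
    conductivity_us_cm: float    # electrical conductivity

@dataclass
class WaterWell:
    well_id: str
    latitude: float              # WGS84 position shown on the GIS map
    longitude: float
    status: str = "active"       # e.g. "active" or "abandoned"
    readings: List[SensorReading] = field(default_factory=list)

    def latest(self) -> Optional[SensorReading]:
        """Most recent reading, e.g. for a remote-monitoring GUI."""
        return max(self.readings, key=lambda r: r.measured_at, default=None)
```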

Design and Implementation of Distributed QoS Management Architecture for Real-time Negotiation and Adaptation Control on CORBA Environments (CORBA 환경에서 실시간 협약 및 적응 제어를 위한 분산 QoS 관리 구조의 설계 및 구현)

  • Lee, Won-Jung;Shin, Chang-Sun;Jeong, Chang-Won;Joo, Su-Chong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.1C
    • /
    • pp.21-35
    • /
    • 2002
  • Nowadays, with increasing expectations of multimedia streaming services on the internet, many distributed applications are being required and developed. However, existing system models cannot support extensibility and reusability when QoS-related functions are developed as integrated modules suited to centrally controlled, special-purpose application services. To cope with these problems, this paper suggests a distributed QoS management system on CORBA, an object-oriented middleware standard. The suggested system provides not only efficient control of resources, various service QoS levels, and the existing QoS control functions, but also real-time QoS negotiation and dynamic adaptation. The system consists of a QoS Control Management Module (QoS CMM) on the client side and a QoS Management Module (QoS MM) on the server side. These distributed modules interface with each other via CORBA on different systems for distributed QoS management while serving distributed streaming applications. In the design phase, we used UML (Unified Modeling Language) to design each component in the modules, their method calls, and the detailed functions for controlling the QoS of stream services. For the implementation, we used OrbixWeb 3.1c (following the CORBA specification) on Solaris 2.5/2.7, the Java language, Java Media Framework API 2.0 beta2, Mini-SQL 1.0.16, and multimedia equipment such as the SunVideoPlus/SunVideo capture board and Sun Camera. Finally, we dynamically displayed numerical data controlled by the real-time negotiation and adaptation procedures, based on QoS map information, on client and server GUIs while the distributed QoS management system executed a given streaming service.
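As a hedged illustration of the negotiate-then-adapt interaction between the client-side QoS CMM and server-side QoS MM described above, the sketch below uses plain Python classes; it is not the authors' CORBA (OrbixWeb/Java) interface, and the QoS parameters and degradation rule are assumptions.

```python
# Plain-Python stand-in for a client/server QoS negotiation exchange.
from dataclasses import dataclass

@dataclass
class QoSSpec:
    frame_rate: int        # frames per second requested/offered
    resolution: tuple      # (width, height)
    bandwidth_kbps: int

class QoSManagementModule:                 # server-side "QoS MM" stand-in
    def __init__(self, available_bandwidth_kbps: int):
        self.available = available_bandwidth_kbps

    def negotiate(self, requested: QoSSpec) -> QoSSpec:
        """Grant the request if resources allow, otherwise degrade it."""
        if requested.bandwidth_kbps <= self.available:
            return requested
        scale = self.available / requested.bandwidth_kbps
        return QoSSpec(frame_rate=max(1, int(requested.frame_rate * scale)),
                       resolution=requested.resolution,
                       bandwidth_kbps=self.available)

class QoSControlManagementModule:          # client-side "QoS CMM" stand-in
    def __init__(self, server: QoSManagementModule):
        self.server = server

    def request_stream(self, desired: QoSSpec) -> QoSSpec:
        agreed = self.server.negotiate(desired)
        # In the real system the agreed level would drive dynamic adaptation
        # of the running stream; here we simply return the negotiated level.
        return agreed

server = QoSManagementModule(available_bandwidth_kbps=1500)
client = QoSControlManagementModule(server)
print(client.request_stream(QoSSpec(frame_rate=30, resolution=(640, 480),
                                    bandwidth_kbps=2000)))
```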

GIS Database and Google Map of the Population at Risk of Cholangiocarcinoma in Mueang Yang District, Nakhon Ratchasima Province of Thailand

  • Kaewpitoon, Soraya J;Rujirakul, Ratana;Joosiri, Apinya;Jantakate, Sirinun;Sangkudloa, Amnat;Kaewthani, Sarochinee;Chimplee, Kanokporn;Khemplila, Kritsakorn;Kaewpitoon, Natthawut
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.3
    • /
    • pp.1293-1297
    • /
    • 2016
  • Cholangiocarcinoma (CCA) is a serious problem in Thailand, particularly in the northeastern and northern regions. A database of the population at risk is required for monitoring, surveillance, home health care, and home visits. Therefore, this study aimed to develop a geographic information system (GIS) database and Google map of the population at risk of CCA in Mueang Yang district, Nakhon Ratchasima province, northeastern Thailand, during June to October 2015. Populations at risk were screened using the Korat CCA verbal screening test (KCVST). The software included Microsoft Excel, ArcGIS, and Google Maps. The secondary data included village points, sub-district boundaries, district boundaries, and hospital points in Mueang Yang district, which were used to create the spatial database. The populations at risk for CCA and opisthorchiasis were used to create an attribute database. Data were transformed to WGS84 UTM Zone 48. After the conversion, all of the data were imported into Google Earth using the online web page www.earthpoint.us. Some 222 of the 4,800 people at risk for CCA constituted a high-risk group. The geo-visual display is available at www.google.com/maps/d/u/0/edit?mid=zPxtcHv_iDLo.kvPpxl5mAs90&hl=th. The geo-visual display has 5 layers: layer 1, village locations and the number of the population at risk for CCA; layer 2, sub-district health promotion hospitals in Mueang Yang district and the number of opisthorchiasis cases; layer 3, sub-districts and the number of the population at risk for CCA; layer 4, the district hospital and the numbers of the population at risk for CCA and of opisthorchiasis cases; and layer 5, the district and the numbers of the population at risk for CCA and of opisthorchiasis cases. This GIS database and Google map production process is suitable for further monitoring, surveillance, and home health care for CCA sufferers.
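A minimal sketch of the Google Earth import step mentioned above: writing village points as KML placemarks with Python's standard xml.etree module. The village names, coordinates, and counts are made up for illustration, and this sidesteps the UTM-to-geographic conversion the authors performed via www.earthpoint.us.

```python
# Write hypothetical village points as a KML file that Google Earth can open.
import xml.etree.ElementTree as ET

villages = [                                  # (name, lon, lat, n_at_risk), hypothetical
    ("Village A", 102.43, 15.45, 12),
    ("Village B", 102.47, 15.41, 7),
]

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
for name, lon, lat, n in villages:
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    ET.SubElement(pm, "description").text = f"Population at risk of CCA: {n}"
    point = ET.SubElement(pm, "Point")
    # KML expects "lon,lat[,alt]" in WGS84 geographic coordinates
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"

ET.ElementTree(kml).write("cca_villages.kml", xml_declaration=True,
                          encoding="UTF-8")
```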

Assessment of Accuracy of SRTM (SRTM(Shuttle Radar Topography Mission)의 정확성 평가)

  • Yoo, Seung-Hwan;Nam, Won-Ho;Choi, Jin-Yong
    • KCID journal
    • /
    • v.14 no.1
    • /
    • pp.80-88
    • /
    • 2007
  • The Shuttle Radar Topography Mission (SRTM) obtained elevation data on a near-global scale to generate the most complete high-resolution digital topographic database of the Earth. SRTM consisted of a specially modified radar system that flew onboard the Space Shuttle Endeavour during an 11-day mission in February 2000. Since 2004, the GLCF (Global Land Cover Facility, http://glcf.umiacs.umd.edu/) web site has provided SRTM products, including 1 km and 90 m resolutions for areas outside the US and a 30 m resolution for the US. This study assesses the accuracy of the SRTM-DEM by comparing it with the NGIS-DEM generated from the NGIS digital topographic map (1:25,000) of the Geum river watershed. For the Geum river watershed, SRTM-DEM elevations ranged from 0 to 1,605 m and NGIS-DEM elevations from 6 to 1,610 m, and the average elevations of the SRTM-DEM and NGIS-DEM were 226.7 m and 218.9 m, respectively. The NGIS-DEM was subtracted from the SRTM-DEM in three zones (Zone I: 0~100 m, Zone II: 100~400 m, Zone III: over 400 m) to estimate the difference between the two DEMs. As a result, the differences were 5.2 m (11.6%) in Zone I, 8.8 m (3.8%) in Zone II, and 12.5 m (2.1%) in Zone III. Although there were differences between the SRTM-DEM and the NGIS-DEM, the SRTM-DEM could be utilized as DEM data for regions where no DEM has been prepared.

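A minimal sketch of the zone-wise DEM comparison described in the entry above, using NumPy on two aligned elevation grids; synthetic arrays stand in for the real SRTM and NGIS DEMs.

```python
# Compare two aligned DEM grids by elevation zone and report mean differences.
import numpy as np

rng = np.random.default_rng(0)
ngis_dem = rng.uniform(0, 1600, size=(100, 100))          # reference elevations (m)
srtm_dem = ngis_dem + rng.normal(0, 10, size=(100, 100))  # "SRTM" with synthetic noise

zones = {"Zone I (0-100 m)":    (0, 100),
         "Zone II (100-400 m)": (100, 400),
         "Zone III (>400 m)":   (400, np.inf)}

for name, (lo, hi) in zones.items():
    mask = (ngis_dem >= lo) & (ngis_dem < hi)
    diff = srtm_dem[mask] - ngis_dem[mask]
    print(f"{name}: mean difference {diff.mean():.1f} m "
          f"(mean abs {np.abs(diff).mean():.1f} m, n={mask.sum()})")
```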

Linkage Map and Quantitative Trait Loci(QTL) on Pig Chromosome 6 (돼지 염색체 6번의 연관지도 및 양적형질 유전자좌위 탐색)

  • Lee, H.Y.;Choi, B.H.;Kim, T.H.;Park, E.W.;Yoon, D.H.;Lee, H.K.;Jeon, G.J.;Cheong, I.C.;Hong, K.C.
    • Journal of Animal Science and Technology
    • /
    • v.45 no.6
    • /
    • pp.939-948
    • /
    • 2003
  • The objective of this study was to identify quantitative trait loci (QTL) for economically important traits such as growth, carcass, and meat quality on pig chromosome 6. A three-generation resource population was constructed from a cross between Korean native boars and Landrace sows. A total of 240 F2 animals were produced by intercrossing 10 boars and 31 sows of the F1 generation. Phenotypic data including body weight at 3 weeks, backfat thickness, muscle pH, shear force, and crude protein level were collected from the F2 animals. Animals including the grandparents (F0), parents (F1), and offspring (F2) were genotyped for 29 microsatellite markers and a PCR-RFLP marker on chromosome 6. The linkage analysis was performed using CRI-MAP software version 2.4 (Green et al., 1990) with the FIXED option to obtain the map distances. The total length of the SSC6 linkage map estimated in this study was 169.3 cM, and the average distance between adjacent markers was 6.05 cM. For QTL mapping, we used the F2 QTL Analysis Servlet of QTL Express, a web-based QTL mapping tool (http://qtl.cap.ed.ac.uk). Five QTLs were detected at the 5% chromosome-wide level on SSC6 for body weight at 3 weeks of age, shear force, meat pH at 24 hours after slaughter, backfat thickness, and crude protein level.
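The abstract reports marker map distances in centimorgans. As a generic illustration only (CRI-MAP's own multipoint estimation is more involved), the standard Haldane and Kosambi mapping functions below convert an observed recombination fraction r into a map distance in cM.

```python
# Standard genetic mapping functions: recombination fraction -> map distance (cM).
import math

def haldane_cm(r: float) -> float:
    """Haldane map distance in cM for recombination fraction 0 <= r < 0.5."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def kosambi_cm(r: float) -> float:
    """Kosambi map distance in cM, which allows for crossover interference."""
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

for r in (0.01, 0.05, 0.10, 0.20):
    print(f"r={r:.2f}  Haldane={haldane_cm(r):5.2f} cM  Kosambi={kosambi_cm(r):5.2f} cM")
```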

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires a different way for analysts to gain access: open APIs, search tools, DB-to-DB interfaces, purchased content, and so on. The second phase is pre-processing to generate useful material for meaningful analysis. If garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is the visualization and presentation of analysis results. The major focus of this phase is to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the market leader, NS Food, with a 66.5% market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, reputation, etc. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, a valence tree map, and other visualized images, providing vivid, full-colored examples built with open-source packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume as a category-by-time matrix, showing the density of color across time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation through its hierarchical structure, since a tree map can present buzz volume and sentiment visually for a certain period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
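A minimal sketch of the lexicon-based sentiment scoring step in the pipeline described above. The authors worked in R (tm, KoNLP, ggplot2, plyr); this Python stand-in uses a tiny made-up lexicon and hypothetical posts to show how the polarity counts feeding the volume/sentiment charts could be produced.

```python
# Toy lexicon-based sentiment scoring and polarity counting.
from collections import Counter
import re

POSITIVE = {"delicious", "tasty", "love", "best"}     # hypothetical lexicon
NEGATIVE = {"salty", "bland", "worst", "expensive"}

posts = [                                             # hypothetical social media snippets
    "This ramen is delicious, the best midnight snack",
    "Too salty for me and a bit expensive",
    "Love the new flavor, tasty broth",
]

def score(text: str) -> int:
    """Count positive minus negative lexicon hits in the tokenized text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

scores = [score(p) for p in posts]
polarity = Counter("positive" if s > 0 else "negative" if s < 0 else "neutral"
                   for s in scores)
print(scores)      # per-post scores, e.g. [2, -2, 2]
print(polarity)    # volume by sentiment class, the kind of count a bar chart
                   # or heat map cell would visualize
```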

Q Analyses of the Structure of Internet Bookstore Users' Subjectivity (인터넷 서점 이용자의 주관성에 관한 Q분석)

  • Jung Huyn-Wook;Kang Hye-Young;Kim Sun-nam
    • Journal of Korean Library and Information Science Society
    • /
    • v.36 no.2
    • /
    • pp.197-220
    • /
    • 2005
  • This paper examined the structure of internet bookstore users' subjectivity by focusing on their beliefs, values, and attitudes. Q methodology was utilized for the study. After constituting a 36-item Q sample and a 28-person P sample, data were collected from April 15, 2005 to April 22, 2005. The analyses showed three types of subjectivity structures. The first was the 'economic benefit-seeking type.' Those in this type were motivated to use internet bookstores to achieve economic benefits. They paid more attention to the discounted prices than to the website content provided by internet bookstores. This type was conspicuously found among college students. The second was the 'information-seeking type.' People in this category visited internet bookstores in order to obtain new information or professional materials. This type was dominantly found among women. The third was the 'convenience-seeking type.' Those in this type were concerned not only with accessibility and convenience, but also with such practical issues as delivery, price, applicability, payment, and bonuses. This type was conspicuously observed among white-collar workers. These findings suggest that in order to make internet bookstores more attractive to users, it is necessary to understand the various needs of users and map out sophisticated marketing strategies on the basis of such knowledge.
