• Title/Summary/Keyword: Web Structure (웹 구조)

Search Results: 1,928

A Smart Mobile Mail System Based on MPEG21-DIDL for Any Mobile Device (모든 모바일 단말기에 서비스 가능한 MPEG21-DIDL 기반의 스마트 모바일 메일 시스템)

  • Zhao, Mei-Hua;Seo, Chang-Wo;Lim, Young-Hwan
    • Journal of Internet Computing and Services
    • /
    • v.11 no.3
    • /
    • pp.1-13
    • /
    • 2010
  • As the computing power of mobile devices improves rapidly, many kinds of web services, such as e-mail, have also become available on mobile devices. Mobile mail service began early, but it has mostly been limited to specific devices such as smartphones, so users had to purchase a particular phone to benefit from it. In this paper, we developed a new kind of mobile mail system, named the Smart Mobile Mail System, based on MPEG21-DIDL markup, which solves this problem. DIDL can be converted by the Mobile Gate Server into the other markup types that individual mobile devices can display. By transforming PC web mail content, including attached documents, into DIDL markup through the Mobile Gate Server (sketched below), the mail service becomes available on all kinds of mobile devices. The system also provides real-time alerting for new e-mail using Callback URL SMS: when a new message arrives, the mail system sends a Callback URL SMS to the user, who can then check the e-mail directly through that SMS in real time.
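
As a rough illustration of the markup conversion described above, the sketch below wraps a mail message and an attachment reference in a minimal DIDL-like container that a gateway could later transcode for a target device. The element layout and the helper name are illustrative assumptions, not the authors' actual Mobile Gate Server code.

```python
# Minimal sketch: wrapping a web-mail message in DIDL-like markup.
# Element and namespace names follow common MPEG-21 DIDL usage, but the
# exact schema used by the paper's Mobile Gate Server is an assumption.
import xml.etree.ElementTree as ET

DIDL_NS = "urn:mpeg:mpeg21:2002:02-DIDL-NS"  # assumed MPEG-21 DIDL namespace

def mail_to_didl(subject, body_html, attachment_url=None):
    ET.register_namespace("didl", DIDL_NS)
    didl = ET.Element(f"{{{DIDL_NS}}}DIDL")
    item = ET.SubElement(didl, f"{{{DIDL_NS}}}Item")

    # Subject carried as a descriptor of the item.
    desc = ET.SubElement(item, f"{{{DIDL_NS}}}Descriptor")
    stmt = ET.SubElement(desc, f"{{{DIDL_NS}}}Statement", {"mimeType": "text/plain"})
    stmt.text = subject

    # Mail body as an inline resource; a gateway would transcode this HTML
    # into whatever markup the requesting device can display.
    comp = ET.SubElement(item, f"{{{DIDL_NS}}}Component")
    res = ET.SubElement(comp, f"{{{DIDL_NS}}}Resource", {"mimeType": "text/html"})
    res.text = body_html

    # Attachment referenced by URL rather than embedded.
    if attachment_url:
        att = ET.SubElement(item, f"{{{DIDL_NS}}}Component")
        ET.SubElement(att, f"{{{DIDL_NS}}}Resource",
                      {"mimeType": "application/octet-stream", "ref": attachment_url})

    return ET.tostring(didl, encoding="unicode")

print(mail_to_didl("Meeting at 3pm", "<p>See the agenda attached.</p>",
                   "http://mail.example.com/att/1234"))
```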

Manual of River Corridor Survey and Monitoring for Nature-Friendly River Management (자연 친화적 하천관리를 위한 수변조사 및 모니터링 매뉴얼)

  • Ock, Giyoung;Woo, Hyoseop;Kim, Kyuho;Cho, Kanghyun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2005.05b
    • /
    • pp.1269-1273
    • /
    • 2005
  • Nature-friendly river management means carrying out river projects according to a systematic sequence of procedures, namely river corridor survey, planning, design, construction, monitoring, and maintenance, in order to prevent the damage to the environmental functions of rivers caused by current flood-control-oriented river improvement practices and to institutionalize nature-friendly river projects. Among these standard procedures, the river corridor survey in particular helps set the theme and direction of a river project by identifying the various natural and artificial formative processes that determine the characteristics of a river and the ecological principles that govern those processes. Monitoring is the process of evaluating the effects of a river project implemented through planning, design, and construction, and of identifying how the river changes in response to the applied techniques, thereby providing the basis for maintenance and adaptive management. The river corridor survey and monitoring manual is an outcome of the 'Development of Nature-Friendly River Improvement Techniques' research project carried out by the Ministry of Construction and Transportation, and was produced on the basis of results directly applied and verified by experts in the relevant fields on the Tancheon and Osancheon in Gyeonggi Province and the Dalcheon in Chungcheongbuk-do. It systematizes the general procedures and methods of the 'river corridor survey' and 'monitoring' needed to carry out river management and river-related projects in a nature-friendly way. In particular, the river corridor survey manual complements 'Chapter 12: River Environment Survey' of the River Design Standards: the river environment survey presented in the standards follows the same procedures and forms as this manual, and the manual supplements the standards by spelling out in concrete terms the survey methods, compilation, analysis, and evaluation that they do not describe in detail.

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.3
    • /
    • pp.669-678
    • /
    • 2018
  • This study developed the information technology infrastructure for building a driving environment analysis platform that uses various kinds of big data, such as vehicle sensing data and public data. First, on the hardware side, a small platform server with a parallel structure for distributed big-data processing was built. Next, on the software side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection software was developed as a collection interface using Kafka, Flume, and Sqoop. The storage software divides data between the Hadoop distributed file system and a Cassandra DB according to how the data are used. The processing software performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying a grid index method, as sketched below. The analysis software was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization software was developed as a Web GIS engine program for providing and visualizing various driving environment information. The performance evaluation derived the number of executors, the optimal memory capacity, and the number of cores for the development server, and showed computation performance superior to that of other cloud computing environments.
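
The grid index matching mentioned for the processing step can be illustrated roughly as follows. The cell size, bounding box, and record layout are assumptions chosen for the example, not the platform's actual implementation.

```python
# Sketch of grid-index spatial matching: each GPS point from vehicle sensing
# data is assigned to a fixed-size grid cell so that records can be grouped
# per cell and per time interval before aggregation. Cell size and the
# bounding box below are illustrative assumptions.
from collections import defaultdict

MIN_LON, MIN_LAT = 124.0, 33.0   # rough bounding box around South Korea
CELL_DEG = 0.001                 # assumed cell resolution (~100 m)

def grid_cell(lon, lat):
    """Return an integer (col, row) grid index for a coordinate."""
    col = int((lon - MIN_LON) / CELL_DEG)
    row = int((lat - MIN_LAT) / CELL_DEG)
    return col, row

def aggregate_speed(records, interval_sec=300):
    """Average speed per (grid cell, 5-minute interval) from (lon, lat, ts, speed) tuples."""
    buckets = defaultdict(list)
    for lon, lat, ts, speed in records:
        key = (grid_cell(lon, lat), ts // interval_sec)
        buckets[key].append(speed)
    return {key: sum(v) / len(v) for key, v in buckets.items()}

sample = [(126.9780, 37.5665, 1520000000, 42.0),
          (126.9781, 37.5666, 1520000060, 38.5)]
print(aggregate_speed(sample))   # both records fall into the same cell/interval
```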

Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.41 no.10
    • /
    • pp.762-773
    • /
    • 2014
  • As the web of data increasingly produces large RDFS datasets, building scalable reasoning engines over large sets of triples has become essential. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triple sets. In many cases, however, only millions of triples need to be handled, and then it is not necessary to deploy an expensive distributed system, because a logic-program-based reasoner on a single machine can achieve reasoning performance similar to that of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner that uses logic programming methods on a single machine and compare our empirical results with those of distributed systems. We show that our logic-programming-based reasoner on a single machine performs as well as an expensive distributed reasoner up to 200 million RDFS triples. In addition, we designed a metadata structure that decomposes the ontology triples into separate sectors: instead of loading all the triples into a single model, we select an appropriate subset of the triples for each ontology reasoning rule. Unification makes it easy to handle the conjunctive queries needed for RDFS schema reasoning, so we designed and implemented the RDFS axioms using logic programming unification and efficient conjunctive query handling mechanisms. The throughput of our approach reached 166K triples/sec over LUBM1500 with 200 million triples, which is comparable to WebPIE, a distributed reasoner using Hadoop and MapReduce that achieves 185K triples/sec. This shows that a distributed system is unnecessary up to 200 million triples and that the performance of a logic-programming-based reasoner on a single machine is comparable to that of an expensive distributed reasoner built on the Hadoop framework (a simplified rule-evaluation sketch follows this entry).
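
To make the rule-based inference concrete, the sketch below forward-chains two standard RDFS entailment rules (rdfs9 and rdfs11) over an in-memory set of triples until a fixed point. It is a plain Python illustration of the kind of rules the reasoner evaluates, not the paper's Prolog-style unification engine.

```python
# Minimal sketch of RDFS forward reasoning on two rules, iterated to a fixed point:
#   rdfs11: (A subClassOf B), (B subClassOf C)  =>  (A subClassOf C)
#   rdfs9 : (A subClassOf B), (x type A)        =>  (x type B)
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples):
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in inferred:
            if p == SUBCLASS:
                for s2, p2, o2 in inferred:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))        # rdfs11
                    if p2 == RDF_TYPE and o2 == s:
                        new.add((s2, RDF_TYPE, o))        # rdfs9
        added = new - inferred
        if added:
            inferred |= added
            changed = True
    return inferred

triples = {("ex:Student", SUBCLASS, "ex:Person"),
           ("ex:Person", SUBCLASS, "ex:Agent"),
           ("ex:alice", RDF_TYPE, "ex:Student")}
for t in sorted(rdfs_closure(triples) - triples):
    print(t)   # alice is also a Person and an Agent; Student is a subclass of Agent
```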

Using Google Earth for a Dynamic Display of Future Climate Change and Its Potential Impacts in the Korean Peninsula (한반도 기후변화의 시각적 표현을 위한 Google Earth 활용)

  • Yoon, Kyung-Dahm;Chung, U-Ran;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.8 no.4
    • /
    • pp.275-278
    • /
    • 2006
  • Google Earth enables people to easily find information linked to geographical locations. It consists of a collection of zoomable satellite images laid over a 3-D Earth model, and any geographically referenced information can be uploaded to the Web and then downloaded directly into Google Earth. This is achieved by encoding the data in Google's open file format, KML (Keyhole Markup Language), after which they are visible as a new layer superimposed on the satellite images. We used KML to create and share fine-resolution gridded temperature data projected to three climatological normal periods between 2011 and 2100, visualizing the site-specific warming and the resulting earlier blooming of spring flowers over the Korean Peninsula. Gridded temperature and phenology data were initially prepared in ArcGIS GRID format and converted to image files (.png), which can be loaded as new layers in Google Earth, as sketched below. We used a high-resolution LCD monitor with 2,560 by 1,600 pixels driven by a dual-link DVI card to facilitate visual effects during the demonstration.
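
The overlay step can be sketched with a standard KML GroundOverlay that drapes a raster image over the terrain. The file name and the bounding box roughly covering the Korean Peninsula are illustrative assumptions; the GroundOverlay structure itself is standard KML 2.2.

```python
# Sketch: wrap a gridded-temperature image (.png) as a KML GroundOverlay so
# that Google Earth displays it as a layer over the satellite imagery.
def ground_overlay_kml(name, image_href, north, south, east, west):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image_href}</href></Icon>
    <LatLonBox>
      <north>{north}</north>
      <south>{south}</south>
      <east>{east}</east>
      <west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

# Hypothetical file name and bounds for a 2071-2100 normal-period layer.
kml = ground_overlay_kml("Projected mean temperature, 2071-2100 normal",
                         "temperature_2071_2100.png",
                         north=43.0, south=33.0, east=131.9, west=124.5)
with open("temperature_overlay.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```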

A Study on the meaning of Database follow the application of Visual Contents (전시콘텐츠 적용 환경에 따른 데이터베이스 의미 고찰)

  • Kim, Min-Su;Yoon, Se-Kyun
    • Archives of design research
    • /
    • v.18 no.1 s.59
    • /
    • pp.17-26
    • /
    • 2005
  • Nowadays, exhibition contents are developing into an information environment governed by the logic of the media operating system. To perceive these media environments and produce cultural contents, designers need to understand not only surface form but the structure beneath it. Appreciating the operating system of data and database means not only systematizing the form and contents of visual contents, but also carrying varied contents into multi-platform, integrative environments. Today's spectacle exhibitions try to express their surface design between the algorithms of data and the database; the information presents an aesthetic, delivering integrated contents to the end user through a playful environment. Participation that used to be offered through the web or games is now developing together with cellular devices and ubiquitous computing systems. From a linear perspective, the end user becomes ever more immersed in a hyper-simulation system because of the operating algorithm of the database, and this requires people to acquire information literacy in a multi-platform society. In the virtual environment, the database offers end users the experience of an unheard-of event, preparing for participants circumstances that give priority to signifiers and endowing an already fixed sensibility with the narrative of a fresh experience.

Improvement of Partial Update for the Web Map Tile Service (실시간 타일 지도 서비스를 위한 타일이미지 갱신 향상 기법)

  • Cho, Sunghwan;Ga, Chillo;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.5
    • /
    • pp.365-373
    • /
    • 2013
  • Tile caching is a commonly used technology that optimizes the delivery of map imagery across the internet in modern WebGIS systems. However, the poor performance of map tile cache updates is one of the major factors that hamper wider use of this technique for datasets with frequent updates. In this paper, we introduce a new algorithm, Partial Area Cache Update (PACU), that significantly reduces redundant updates of map tiles when the source map data change very frequently. The performance of the algorithm is verified with the cadastral map data of Pyeongtaek, Gyeonggi Province, where approximately 3,100 changes occur per day among 331,594 parcels. The experimental results show that PACU is 6.6 times faster than ESRI ArcGIS Server®. The algorithm contributes significantly to solving the frequent-update problem and enables web map tile services for data that require frequent updates (see the sketch below).
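
The partial-update idea can be illustrated with standard web-map tile math: from the bounding box of a changed parcel, compute only the tile indices that intersect it at each zoom level and re-render just those. The sketch below uses the common Web Mercator XYZ tiling scheme and does not reproduce the PACU algorithm's actual details.

```python
# Sketch of a partial tile-cache update: given the bounding box of changed
# features, list the XYZ tiles that intersect it per zoom level, so only
# those tiles are re-rendered instead of the whole cache.
import math

def lonlat_to_tile(lon, lat, zoom):
    """Standard Web Mercator XYZ tile index for a WGS84 coordinate."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def dirty_tiles(min_lon, min_lat, max_lon, max_lat, zooms):
    """Tiles intersecting the changed bounding box at each zoom level."""
    tiles = set()
    for z in zooms:
        x0, y1 = lonlat_to_tile(min_lon, min_lat, z)  # y grows southward
        x1, y0 = lonlat_to_tile(max_lon, max_lat, z)
        for x in range(x0, x1 + 1):
            for y in range(y0, y1 + 1):
                tiles.add((z, x, y))
    return tiles

# Hypothetical changed parcel near Pyeongtaek, updated for zoom levels 14-17.
print(sorted(dirty_tiles(127.085, 36.990, 127.090, 36.995, range(14, 18))))
```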

XML View Indexing Using an RDBMS based XML Storage System (관계 DBMS 기반 XML 저장시스템 상에서의 XML 뷰 인덱싱)

  • Park, Dae-Sung;Kim, Young-Sung;Kang, Hyunchul
    • Journal of Internet Computing and Services
    • /
    • v.6 no.4
    • /
    • pp.59-73
    • /
    • 2005
  • Caching query results and reusing them when processing subsequent queries is an important query optimization technique; materialized views and view indexing are representative examples. Both schemes have received much attention for relational databases and have been investigated for XML data since XML emerged as the standard for data exchange on the Web. In XML view indexing, an XML view xv, the result of an XML query, is represented as an XML view index (XVI), a structure containing the identifiers of xv's underlying XML elements as well as information about xv. Since the XVI for xv stores only the identifiers of the XML elements, not the elements themselves, when xv is requested its XVI must be materialized against xv's underlying XML documents, as illustrated below. In this paper, we address the problem of integrating an XML view index management system with an RDBMS-based XML storage system. The proposed system was implemented in Java on Windows 2000 Server with each of two different commercial RDBMSs and used to evaluate the performance improvement from XML view indexing as well as its overhead. The experimental results revealed that XML view indexing is very effective with an RDBMS-based XML storage system, while its overhead is negligible.
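
A simplified illustration of the materialization step follows: an XVI that stores only element identifiers is joined back against the base store to rebuild the view result. The table layout, view name, and element contents are assumptions made for the example, not the paper's actual schema.

```python
# Sketch of materializing an XML view index (XVI): the index keeps only the
# identifiers of the view's underlying elements, so serving the view means
# fetching those elements from the base store and reassembling the result.
import sqlite3

# Toy element table standing in for an RDBMS-backed XML store (assumed layout).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE element (id INTEGER PRIMARY KEY, doc TEXT, content TEXT)")
conn.executemany("INSERT INTO element VALUES (?, ?, ?)", [
    (1, "books.xml", "<book><title>XML Views</title></book>"),
    (2, "books.xml", "<book><title>RDBMS Storage</title></book>"),
    (3, "mags.xml",  "<magazine><title>Web Data</title></magazine>"),
])

# The XVI for a hypothetical view keeps element ids only, not the elements.
xvi = {"view": "recent_books", "element_ids": [2, 1]}

def materialize(view_index):
    """Rebuild the view result by looking up the stored element ids."""
    parts = []
    for eid in view_index["element_ids"]:
        (content,) = conn.execute(
            "SELECT content FROM element WHERE id = ?", (eid,)).fetchone()
        parts.append(content)
    return "<result>" + "".join(parts) + "</result>"

print(materialize(xvi))
```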

A Case Study on SK Telecom's Next Generation Marketing System Development (SK텔레콤의 차세대 마케팅 시스템 개발사례 연구)

  • Lee, Sang-Goo;Jang, Si-Young;Yang, Jung-Yeon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.2
    • /
    • pp.158-170
    • /
    • 2008
  • In response to the changing demands of an ever more competitive market, SK Telecom has built a new marketing system that can support dynamic marketing campaigns and, at the same time, scale up to the large volumes of data and transactions expected over the next decade. The system, which employs a Unix-based client-server architecture with web browser interfaces, will replace the current mainframe-based COIS system. The project, named NGM (Next Generation Marketing), is unprecedentedly large in scale. However, both managerial and technical problems led the project into a crisis: the application framework, which depended on a software solution from a major global vendor, could not support the dynamic functionality required for the new system. In March 2005, SK Telecom declared the suspension of the NGM project. The second phase of the project started in May 2005 following a comprehensive replanning, and it was decided that, since no single existing solution could cope with the complexity of the new system, the system would be custom-built. As such, a number of technical challenges emerged. In this paper, we report on three key dimensions of these technical challenges: middleware and application framework, database architecture and tuning, and system performance. The processes and approaches adopted in building the NGM system may be viewed as "best practices" in the telecom industry. The completed NGM system, now called the "U.key System," successfully came into operation on October 9, 2006. This new infrastructure is expected to give birth to a series of innovative, fruitful, and customer-oriented applications in the near future.

Roles of Buyer's Trust and Distrust in Open Markets: Focusing on Transfer between Intermediary and Seller (오픈마켓에서 구매자의 신뢰와 불신의 역할: 중개업체와 판매자간 전이를 중심으로)

  • Lee, Suk-Joo;Choi, Seulbi;Ahn, Hyunchul
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.2
    • /
    • pp.360-374
    • /
    • 2017
  • This study investigates the effects of trust and distrust on purchase intention in open markets, building on ideas from previous studies such as the coexistence of trust and distrust and the presence of two distinct trustees in open markets: the intermediary and the sellers. Specifically, this study i) proposes a trust-distrust model of the intermediary and sellers, ii) explores the transfer of trust and distrust from the intermediary to sellers, and iii) identifies the antecedents of trust and distrust. Empirical validation using Partial Least Squares yields three results. First, trust in the intermediary positively affects purchase intention through the mediated impact of trust in sellers; that is, trust in the intermediary transfers to trust in sellers. Second, distrust in the intermediary negatively affects purchase intention through the mediated impact of customers' perceived risk. Third, structural assurance and perceived website quality positively affect trust in the intermediary. These results have implications for intermediary firms, which need not only to build trust but also to manage the level of distrust. However, this study could not identify any antecedent of distrust, so further research on such antecedents is needed.