• Title/Summary/Keyword: Web information


Security of Web Applications: Threats, Vulnerabilities, and Protection Methods

  • Mohammed, Asma;Alkhathami, Jamilah;Alsuwat, Hatim;Alsuwat, Emad
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.167-176
    • /
    • 2021
  • This is the era of computer science and innovation: every day new apps, websites, and software are introduced, and with them new threats and security vulnerabilities. Web applications are software that customers use for numerous useful tasks, but because developers do not always follow good programming standards, web applications also expose multiple attack surfaces. Web application security is expected to protect the content of critical sites and to ensure secure data transmission. Security must therefore be enforced across all infrastructure that supports the web application, including the application itself. Many organizations currently operate, or attempt to build, some form of web application protection scheme, but the bulk of these schemes fail to generate value consistently and effectively, and therefore do not improve developers' attitudes toward building stable web applications. This article analyzes attacks on websites and reviews web application security scanners to help resolve web application security challenges.
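As a toy illustration of the kind of signature check such a security scanner applies, the sketch below flags user input matching a few classic attack patterns; the patterns and the function name are illustrative assumptions, not the article's scanner.

```python
import re

# Hypothetical, minimal signature check in the spirit of a web application
# security scanner; real scanners for SQL injection and XSS are far more
# sophisticated than these three patterns.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),   # classic SQL injection probe
    re.compile(r"(?i)<script\b"),            # reflected XSS payload
    re.compile(r"(?i)\bor\s+1\s*=\s*1\b"),   # tautology-based SQL injection
]

def looks_suspicious(user_input: str) -> bool:
    """Return True if the input matches any known attack signature."""
    return any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)

print(looks_suspicious("alice"))                      # benign input
print(looks_suspicious("' OR 1=1 --"))                # SQL injection probe
print(looks_suspicious("<script>alert(1)</script>"))  # XSS payload
```

Signature lists like this are easy to evade, which is exactly the limitation that motivates the broader scanners the article surveys.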

A Study on Web Service Quality and Role of Relationship Quality of Job Information Sites (취업정보사이트의 웹서비스품질과 관계품질 역할 연구)

  • Cho, Chul-Ho
    • Journal of Korean Society for Quality Management
    • /
    • v.40 no.2
    • /
    • pp.219-230
    • /
    • 2012
  • These days, getting a job is emerging as a hot social issue, and specialized sites offering job information are rapidly increasing. Despite this quantitative growth, job information sites have many problems in satisfying customers' needs. This study explores web service quality factors of job information sites and the relationships among service quality, customer satisfaction, relationship quality, and reuse intention. We found that customer satisfaction precedes relationship quality, which determines long-term customer relationships. Trust, a component of relationship quality, and customer satisfaction each affect customers' reuse intention. The study also found that service quality for job information sites comprises four factors: delivery of information, customization, web design, and interaction. Delivery of information, web design, and interaction affect trust, while web design and interaction affect customer satisfaction. Relationship quality also precedes reuse intention.

Design of a PDA WebDAV Client Based on .NET Compact Framework (.NET Compact Framework 기반의 PDA WebDAV 클라이언트 설계)

  • Kim, Dongho;Shin, Wonjoon;Park, Jinho;Lee, Myungjoon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.11a
    • /
    • pp.583-585
    • /
    • 2005
  • The WebDAV protocol is an IETF standard for supporting collaborative authoring on the web; it extends the HTTP protocol so that geographically distant users can jointly edit and manage files. By forming a virtual workspace on the web, it enables remote users to collaborate in new ways. With the development of wireless network technology and wireless devices, collaborating while on the move, connected to a WebDAV server, can be more efficient than in a wired environment. In this paper, we design a PDA WebDAV client that runs on the .NET Compact Framework. In a wireless networking environment, the client uses a PDA to browse server resources on a WebDAV server via HTTP requests. Such a PDA WebDAV client must provide an interface suited to the PDA environment and must conform to the WebDAV specification. By using a PDA, which is free of spatial constraints, the client supports an active collaboration environment through efficient sharing and exchange of data.
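Browsing server resources over WebDAV relies on the PROPFIND method. A minimal sketch of building such a request follows, with the method, headers, and XML body per the WebDAV specification (RFC 2518); the depth and property choices are illustrative assumptions.

```python
from xml.etree import ElementTree as ET

# Sketch of the WebDAV PROPFIND request a client like the one described
# would issue to list a collection's resources (RFC 2518).
DAV_NS = "DAV:"

def build_propfind(depth: int = 1) -> tuple[dict, bytes]:
    """Return the headers and XML body for a PROPFIND listing request."""
    headers = {
        "Depth": str(depth),  # 0 = the resource itself, 1 = its children
        "Content-Type": 'application/xml; charset="utf-8"',
    }
    root = ET.Element(f"{{{DAV_NS}}}propfind")
    ET.SubElement(root, f"{{{DAV_NS}}}allprop")  # request all live properties
    body = ET.tostring(root, encoding="utf-8", xml_declaration=True)
    return headers, body

headers, body = build_propfind()
print(headers["Depth"])
print(b"allprop" in body)
```

The response is a 207 Multi-Status XML document listing each child resource, which a PDA client would then render in its constrained UI.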


Web Information Extraction and Multidimensional Analysis Using XML (XML을 이용한 웹 정보 추출 및 다차원 분석)

  • Park, Byung-Kwon
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.5
    • /
    • pp.567-578
    • /
    • 2008
  • To analyze the huge number of web pages available on the Internet, we need to extract the information encoded in them. In this paper, we propose a method to extract web information from web pages and convert it into XML documents for multidimensional analysis. For extraction, we propose two languages: one for describing web information extraction rules based on an object-oriented model, and another for describing regular expressions over HTML tag patterns that locate the target information. For multidimensional analysis of XML documents, we propose a method for constructing an XML warehouse and various XML cubes from it, in the same way as for relational data. Finally, we show the validity of our method by applying it to US patent web pages.
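A toy version of the two-language idea, a rule set of regular expressions over HTML tag patterns whose matches are emitted as an XML document, might look like this; the patent-page fragment and the rule syntax are illustrative assumptions, not the paper's actual languages.

```python
import re
from xml.etree import ElementTree as ET

# Illustrative patent-page fragment; real US patent pages are far richer.
html = """
<b>Patent No.:</b> <span>US 7,000,000</span>
<b>Inventor:</b> <span>Jane Doe</span>
"""

# Extraction rules: field name -> regex over HTML tag patterns,
# with one capture group for the target information.
rules = {
    "number":   r"Patent No\.:</b>\s*<span>([^<]+)</span>",
    "inventor": r"Inventor:</b>\s*<span>([^<]+)</span>",
}

# Emit the matched fields as an XML document for later analysis.
patent = ET.Element("patent")
for field, pattern in rules.items():
    m = re.search(pattern, html)
    if m:
        ET.SubElement(patent, field).text = m.group(1).strip()

print(ET.tostring(patent, encoding="unicode"))
```

Documents produced this way share a schema, which is what makes loading them into an XML warehouse and building cubes over dimensions such as inventor or year feasible.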


A Study on Consumer Oriented GIS : GIS 2.0 (GIS 2.0 : 소비자 참여형 GIS에 대한 고찰)

  • Kang, Ho-Seok
    • Spatial Information Research
    • /
    • v.14 no.3 s.38
    • /
    • pp.261-270
    • /
    • 2006
  • With the development of the computer and the Internet, GIS (Geographic Information System) has provided diverse services to users. Web 2.0, the next-generation web, creates new business by accumulating and structuring information from many users; its characteristics include blogs, long-tail distribution, SNS, DCC, collective intelligence, and so on. This study proposes a GIS 2.0 service model that maximizes the adoption and use of GIS by applying two characteristics of Web 2.0, two-way communication and consumer participation, to build a more practical GIS, rather than providing spatial information in a one-way, simple-inquiry-oriented manner.


Improving Malicious Web Code Classification with Sequence by Machine Learning

  • Paik, Incheon
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.5
    • /
    • pp.319-324
    • /
    • 2014
  • Web applications make life more convenient. Many web applications accept several kinds of user input (e.g., personal information or user comments on commercial goods). On the other hand, the input functions of web applications contain a range of vulnerabilities, and the free accessibility of many web applications invites malicious actions. Attacks exploiting these input vulnerabilities inject malicious web code, enabling a variety of illegal actions such as SQL Injection Attacks (SQLIAs) and Cross-Site Scripting (XSS); these come down to theft or alteration of personal information, or phishing. Existing solutions use a parser for the code, are limited to small fixed patterns, and adapt poorly to variations. A machine learning method can cover a far broader range of malicious web code and adapts easily to variations and changes. This paper therefore proposes adaptable classification of malicious web code by machine learning for detecting exploitation of user inputs. The approach first identifies code that "looks malicious" as a proxy for real malicious code; a more detailed classification using sequence information is also introduced. The precision is 99% for the "looks-like-malicious" classification and 90% for the precise classification with sequences.
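A minimal sketch of token-based classification in this spirit is shown below; the tiny hand-made training set and the plain Naive Bayes model are illustrative assumptions, not the paper's features or classifier.

```python
import math
import re
from collections import Counter

def tokens(code: str) -> list[str]:
    """Crude lexer: words plus a few punctuation characters common in web code."""
    return re.findall(r"[A-Za-z_]+|[<>'\"=();]", code.lower())

# Illustrative labeled snippets; a real system would train on a large corpus.
train = [
    ("select name from users where id = 1", "benign"),
    ("<p>hello world</p>", "benign"),
    ("' or '1' = '1", "malicious"),
    ("<script>document.cookie</script>", "malicious"),
]

counts = {"benign": Counter(), "malicious": Counter()}
for code, label in train:
    counts[label].update(tokens(code))

def classify(code: str) -> str:
    """Naive Bayes over token counts with add-one smoothing."""
    scores = {}
    for label, cnt in counts.items():
        total = sum(cnt.values())
        vocab = len(set(cnt))
        scores[label] = sum(
            math.log((cnt[t] + 1) / (total + vocab + 1)) for t in tokens(code)
        )
    return max(scores, key=scores.get)

print(classify("<script>alert(1)</script>"))
```

A bag-of-tokens model like this ignores ordering, which is precisely what the paper's sequence-based refinement addresses.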

A Method for Efficient Structure Management and Evaluation of Website (웹사이트의 효율적인 구조 관리와 평가 방법)

  • 유대승;엄정섭;이명재
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2002.06a
    • /
    • pp.306-315
    • /
    • 2002
  • With the rapid growth of the WWW, existing systems are being integrated into the web and various web-based systems are being developed. Unlike general applications, web applications are developed by combining various technologies and have their own complexities, so their development and maintenance present considerable difficulty. To accommodate rapidly changing business environments and user requirements, continuous evolution is required. In this paper, we present a method for supporting the effective development and maintenance of web applications. Our method extracts a web application's structure information and analyzes web log files containing useful information about the web site. We also describe a web testing method that uses the extracted information, and our system for extracting hyperlink information and analyzing web logs.
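The hyperlink-extraction step that recovers a site's structure can be sketched with the standard library; the page fragment below is an illustrative assumption, not the paper's system.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets of <a> tags; the edges of a site's link graph."""

    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<a href="/about.html">About</a> <a href="/jobs.html">Jobs</a>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # outgoing links of this page
```

Running this over every page and recording (page, link) pairs yields the hyperlink structure that can then be cross-checked against web logs, e.g. to find unreachable or never-visited pages.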


A Comparative Study of Web Information Resources on Science & Technology (과학기술분야 웹 정보원 평가 및 비교 연구)

  • 김석영
    • Journal of Korean Library and Information Science Society
    • /
    • v.33 no.3
    • /
    • pp.133-152
    • /
    • 2002
  • The purpose of this study is to propose criteria for evaluating web information resources. The study also attempts to assess the overall quality of web information resources and to identify the relationships among the criteria, particularly for resources on science and technology. Three evaluation categories are proposed: information content, functionality (or workability), and design. Core features of information content include authority, relevancy, and currency; functionality includes navigation, user support, and technical requirements; layout and design include visual appearance. Based on the proposed criteria, 50 sample web resources selected from 5 different fields were evaluated. The results showed that web information resources in the electrical and electronic engineering field were excellent. Pearson's correlation coefficients between the evaluation criteria showed that information content and functionality had a negative relationship, while functionality and design had a moderate correlation.
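The criterion-to-criterion relationship reported above rests on Pearson's correlation coefficient, which can be computed as below; the score lists are made-up illustration, not the study's data.

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson's r: covariance of the scores over the product of their spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-site scores for two criteria (not the paper's data).
content       = [4.0, 3.5, 5.0, 2.0, 4.5]
functionality = [2.5, 3.0, 2.0, 4.0, 2.5]
print(round(pearson(content, functionality), 3))  # negative, as in the study
```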


Design of Web Robot Engine Using Distributed Collection Model Processing (분산수집 모델을 이용한 웹 로봇의 설계 및 구현)

  • Kim, Dae-Yu;Kim, Jung-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.1
    • /
    • pp.115-121
    • /
    • 2010
  • As the Internet becomes widespread, a great deal of information is made public, and Internet users can access it effectively through web search services. Building a web search service requires a method of collecting web pages. As the number of web pages grows, it becomes necessary to collect high-quality information, and a variety of web search engines have therefore been developed. This paper presents a method of extracting links embedded in JavaScript on dynamic web pages, together with the design of a web search robot. To evaluate performance, we built a search model using the proposed method. Collection took 2 minutes 67 seconds for 299 web pages and 12.33 seconds with 10 collection models.
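The distinctive step here, extracting links that plain HTML parsing misses because they are generated by JavaScript handlers, can be sketched as follows; the onclick patterns are illustrative assumptions, not the paper's extraction rules.

```python
import re

# Match URLs assigned in common JavaScript navigation idioms:
# location.href = '...' and window.open('...').
JS_URL = re.compile(
    r"""(?:location\.href\s*=\s*|window\.open\s*\()\s*['"]([^'"]+)['"]"""
)

page = """
<a onclick="location.href='/news/1.html'">news</a>
<button onclick="window.open('/popup.html')">open</button>
"""

print(JS_URL.findall(page))  # links invisible to an href-only crawler
```

A crawler would merge these with ordinary href links before scheduling the next fetches, which is why dynamic pages need this extra pass.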

A Study of the Reliability of Web Services using Client Sides Errors

  • Lee, Sang-Bock;Kim, Mal-Suk
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.2
    • /
    • pp.217-221
    • /
    • 2003
  • Modeling the reliability of distributed systems requires a good understanding of the reliability of the components. For thousands of web users, competitiveness in web services means a successful presence on the web. Failure rates for the presence of a web site are examined in terms of client-side errors as defined in RFC 2068. Data were collected from selected hosts via the Internet.
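A client-side failure rate of the kind described, the share of requests ending in a 4xx client-error status as defined in RFC 2068, can be sketched as follows; the sample status codes are made up for illustration.

```python
from collections import Counter

# Hypothetical access-log status codes; per RFC 2068, 4xx = client error.
statuses = [200, 200, 404, 200, 403, 200, 200, 410, 200, 200]

counts = Counter(statuses)
client_errors = sum(n for code, n in counts.items() if 400 <= code < 500)
failure_rate = client_errors / len(statuses)
print(failure_rate)  # fraction of requests that failed on the client side
```

Tracked over time, such rates give the per-component failure data that reliability models of a distributed web service take as input.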
