• Title/Summary/Keyword: Web-Linked Information

Search Results: 184

Incorporation of Media in the Activities of Scientific Library of Higher Education Institution

  • Horban, Yurii;Berezhna, Oksana;Bohush, Iryna;Doroshenko, Yevhenii;Kovbel, Viktoriia
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.59-66
    • /
    • 2022
  • The introduction of Web 2.0 and its associated tools and technologies allows students to connect with one another easily, and it is no secret that emerging digital tools are systematically reshaping the education system. The purpose of this research article is to evaluate the influence of incorporating media into the activities of the scientific library of a higher education institution. The research methodology covers the concepts, techniques, and procedures used to bring together primary and secondary data. Quantitative primary research was conducted in the form of a survey: the researchers designed a survey to build a comprehensive view of the incorporation of media in the operations of scientific libraries of higher education institutions, and fifty-one principals of higher education institutions were invited to participate. These respondents were selected because they are well educated and aware of the impact of technological innovation on schooling, so the survey gave the researchers a comprehensive view of the situation. The results show that most participants believe social media plays a vital role in shaping higher education, and that the libraries of well-known educational institutions must adapt to the new educational trends so that both teachers and students can benefit. The practical significance of the results rests on the survey analysis, and peer-reviewed journals were used to provide authoritative supporting information. The researchers have therefore gathered useful insight into this topic.

Study on the Application of Big Data Mining to Activate Physical Distribution Cooperation : Focusing AHP Technique (물류공동화 활성화를 위한 빅데이터 마이닝 적용 연구 : AHP 기법을 중심으로)

  • Young-Hyun Pak;Jae-Ho Lee;Kyeong-Woo Kim
    • Korea Trade Review
    • /
    • v.46 no.5
    • /
    • pp.65-81
    • /
    • 2021
  • The technological development of the 4th industrial revolution era is changing the paradigm of various industries. Technologies such as big data, cloud computing, artificial intelligence, virtual reality, and the Internet of Things are creating synergy with existing industries, driving radical development and value creation. Among these industries, logistics has long been shaped by quantitative data and has continuously accumulated and managed such data, so it is well suited to big data analysis and can benefit greatly from it. Data mining techniques have developed alongside these technologies to discover hidden patterns and new correlations in big data, and meaningful results are being derived through them. Data mining therefore occupies an important place in big data analysis, and this study analyzed the data mining techniques that can contribute to the logistics field and to joint logistics (physical distribution cooperation). Using the AHP technique, the study derived priorities among the types of data mining that are effective for activating physical distribution cooperation, with the R language and RStudio used as analysis tools. The AHP criteria were association analysis, cluster analysis, decision trees, artificial neural networks, web mining, and opinion mining; the alternatives were joint transport and delivery, a joint logistics center, a joint logistics information system, and a joint logistics partnership.
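
AHP itself is a small, well-defined calculation. As a rough illustration of the priority derivation the paper performs in R, the following Python sketch computes priority weights and a consistency ratio from a hypothetical 4x4 pairwise-comparison matrix over the four alternatives; the judgment values are invented and not taken from the study.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for the four alternatives
# (joint transport/delivery, joint logistics center, joint logistics
# information system, joint logistics partnership). Values are invented.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

# The principal eigenvector of A gives the AHP priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio; CR < 0.1 is conventionally considered acceptable.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90  # 0.90 is the random index for n = 4
print("priorities:", np.round(weights, 3), "CR:", round(cr, 3))
```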

A Development of Facility Web Program for Small and Medium-Sized PSM Workplaces (중·소규모 공정안전관리 사업장의 웹 전산시스템 개발)

  • Kim, Young Suk;Park, Dal Jae
    • Korean Chemical Engineering Research
    • /
    • v.60 no.3
    • /
    • pp.334-346
    • /
    • 2022
  • In small and medium-sized workplaces there is a lack of knowledge and information on understanding and applying the Process Safety Management (PSM) system, which is recognized as a major cause of industrial accidents. It is therefore necessary to prepare a protocol that secures a practical and continuous level of PSM implementation and eliminates human error through tracking management, yet little research has been conducted on this. This study investigated and analyzed various violations of administrative measures, based on the regulations announced by the Ministry of Employment and Labor, in approximately 200 small and medium-sized PSM workplaces with fewer than 300 employees across Korea. It aimed to contribute to the prevention of major industrial accidents by developing a facility maintenance web program that removes human error in small and medium-sized workplaces. The major results are summarized as follows. First, the program is accessed on the web via a QR code on a smart device, so that equipment specifications, causes of failure, and photographs can be checked conveniently and requests for inspection and maintenance can be made in real time. Second, the identification of change targets, risk assessment, worker training, and pre-operation inspection are linked in the program, allowing the administrator to track every procedure from start to finish. Third, the program makes it possible to predict equipment life and verify its reliability from the data accumulated by registering photographs of improvements and repairs together with the time required, cost, and similar records after work is completed. These results should be helpful for the practical and systematic operation of small and medium-sized PSM workplaces, and can also be usefully applied to developing and disseminating facility maintenance web programs when such workplaces establish smart factories under government direction.
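
As a minimal sketch of the QR-code access pattern described in the first result (not the authors' implementation), the snippet below uses the Python `qrcode` package to encode a hypothetical equipment-record URL so it can be printed and attached to the equipment; the domain, path, and equipment ID are invented.

```python
import qrcode  # third-party package: pip install qrcode[pil]

# Hypothetical URL of one equipment record in a facility-maintenance
# web program; domain, path, and equipment ID are placeholders.
equipment_url = "https://example.com/psm/equipment/PUMP-0042"

# Encode the URL as a QR code image; once printed and attached to the
# equipment, scanning it on a smart device opens the record for
# inspection and maintenance requests.
qrcode.make(equipment_url).save("PUMP-0042_qr.png")
```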

Structural Assets of Local Broadcasting Networks and Regional Gap: Focusing on Local MBC stations in South Korea (지역 방송국 네트워크의 구조적 자산(asset)과 지역 간 격차: 지역MBC를 중심으로)

  • Son, Ji-Hoon;Lee, Jung-Min;Kim, Jae-Hun;Park, Han-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.9
    • /
    • pp.194-204
    • /
    • 2022
  • This study examined the social capital and geographical gaps of local television stations using web data gathered by crawling their websites. URLs were collected from the websites of 16 local MBC stations. MBC is an abbreviation of Munhwa Broadcasting Corporation, one of South Korea's largest television and radio broadcasters; Munhwa is a Sino-Korean word meaning "culture." A Web Impact Report was first used to determine which institutions the local broadcasting stations were linked to. To investigate the specific connection types, the URL information was classified using the n-tuple helix model, followed by a 2-mode network analysis. The n-tuple helix model extends the standard university-business-government triple-helix model by including additional network innovation actors. The results show that local broadcasting stations relied heavily on activities such as festivals, performances, and exhibitions to engage the local community. Local stations in the Daegu-Gyeongbuk and Busan-Ulsan-Gyeongnam areas were identified as having the most diverse connections to the local community among all regions.
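
A 2-mode (bipartite) network analysis of stations and linked institution types can be illustrated in a few lines of networkx; the sketch below uses invented station names and institution categories and is not the authors' dataset or code.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical 2-mode data: each station is linked to categories of
# institutions found among its website URLs (all names are invented).
links = [
    ("MBC_A", "local government"), ("MBC_A", "festival"),
    ("MBC_B", "university"),       ("MBC_B", "festival"),
    ("MBC_C", "festival"),         ("MBC_C", "exhibition"),
]

G = nx.Graph()
stations = {s for s, _ in links}
G.add_nodes_from(stations, bipartite=0)
G.add_nodes_from({t for _, t in links}, bipartite=1)
G.add_edges_from(links)

# Degree of a station node = diversity of its institutional connections.
print({s: G.degree(s) for s in sorted(stations)})

# One-mode projection: stations are tied when they share an institution type.
projection = bipartite.weighted_projected_graph(G, stations)
print(sorted(projection.edges(data="weight")))
```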

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to the resources they upload, such as bookmarks and pictures, for later use or for sharing. The collection of resources and tags generated by one user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most pressing need of folksonomy users is to find useful resources or experts on specific topics efficiently, and an excellent ranking algorithm should assign higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Is there a standard superior to frequency or freshness? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented with a graph-based ranking algorithm; two well-known representatives are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher score to pages linked to by other highly scored pages; HITS differs from PageRank in that it uses two kinds of scores, authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (users, resources, and tags). Uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would therefore be unreasonable. In a folksonomic system, each link corresponding to a property can have the opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm can be applied to a folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, originally proposed for ranking Semantic Web resources, enables the calculation of importance scores for heterogeneous resources unaffected by link direction. The weight of a property representing the mutual interaction between classes is assigned according to the relative significance of that property to the resource importance of each class. This class-oriented approach reflects the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable; this resembles human evaluation, in which different items are given specific weights that are then summed into a weighted average. Missing properties can also be checked more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subject and object. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the order of assignment. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but who also tends to add documents of high quality to his or her collection; such documents are identified by the number, as well as the expertise, of the users who hold the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, entities related more closely to a given entity need to be ranked higher, and because the popularity of a topic in social media is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all of these considerations are addressed and which can easily be customized to each folksonomy site. To examine the validity of our ranking algorithm and show how property, time, and expertise weights are adjusted, we first use a dataset designed to analyze the effect of each ranking factor independently, and then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known for ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm, based on the concept of mutual interaction, appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm is better at lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and improves the overall consistency of time-valued ranking; the expertise weights of the previous study can obstruct time-valued ranking because the number of followers increases over time. Third, many new properties and classes can be included in our framework, whereas the previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a large difference in calculation time and memory use: the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, but this is unnecessary in our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper is applicable to various domains, including social media, where time value is important.
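
The mutual reinforcement between user expertise and document quality described above is, at its core, a HITS-style iteration on a bipartite user-document graph. The sketch below illustrates that generic idea on a hypothetical incidence matrix; it is not the authors' mutual-interaction algorithm, which additionally weights properties, time, and assignment order.

```python
import numpy as np

# Hypothetical user-document incidence matrix: M[u, d] = 1 if user u has
# document d (annotated with the tag of interest) in his or her collection.
M = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

# HITS-style mutual reinforcement: a user's expertise grows with the
# quality of the documents collected, and a document's quality grows
# with the expertise of the users who collected it.
expertise = np.ones(M.shape[0])
quality = np.ones(M.shape[1])
for _ in range(50):
    quality = M.T @ expertise
    expertise = M @ quality
    quality /= np.linalg.norm(quality)
    expertise /= np.linalg.norm(expertise)

print("user expertise:", np.round(expertise, 3))
print("document quality:", np.round(quality, 3))
```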

A Study on the Records Management for Evidence-Based Accountability of Corporations : Focusing on Sustainability Reports (기업의 증거기반 설명책임을 위한 기록관리 방안 '지속가능성보고서'를 중심으로)

  • Jung, Mi Ri;Yim, Jin-Hee
    • The Korean Journal of Archival Studies
    • /
    • no.48
    • /
    • pp.45-92
    • /
    • 2016
  • Corporations report their economic, environmental, and social influences and achievements through sustainability reports. Unlike financial reports, which are subject to legal restrictions, sustainability reports describe a corporation's non-financial achievements, so the reliability of the information depends entirely on the corporation itself. Current sustainability reports cannot include proof or sources for their index data, so they tend to be regarded as a means of publicity, and their reliability is often questioned. This research applies the concept of evidence-based accountability, which allows accountability to be confirmed through records covering the content and context of tasks. Evidence-based accountability means producing and accumulating records that witness actions, managing those records as usable information, and using them as accountability information. Index data from the sustainability reports of domestic corporations and the web-based reports of Vodafone were reviewed, and measures to link task records as proof of index data were studied. To make this possible, the record production and acquisition system was redesigned to secure the required records as evidence, and a linked build-up of the SR system and the RMS was proposed. The proposed system allows records to be collected and managed as SR accountability information and provides the data when necessary. A corporate infrastructure was also proposed that builds a professional records management system in stages through organizational systems and regulations; the cooperation of staff within this infrastructure will support reliable corporate accountability.

Using Google Earth for a Dynamic Display of Future Climate Change and Its Potential Impacts in the Korean Peninsula (한반도 기후변화의 시각적 표현을 위한 Google Earth 활용)

  • Yoon, Kyung-Dahm;Chung, U-Ran;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.8 no.4
    • /
    • pp.275-278
    • /
    • 2006
  • Google Earth enables people to easily find information linked to geographical locations. It consists of a collection of zoomable satellite images laid over a 3-D Earth model, and any geographically referenced information can be uploaded to the Web and then downloaded directly into Google Earth. This is achieved by encoding the information in Google's open file format, KML (Keyhole Markup Language), so that it becomes visible as a new layer superimposed on the satellite images. We used KML to create and share fine-resolution gridded temperature data projected to three climatological normal years between 2011 and 2100, visualizing site-specific warming and the resulting earlier blooming of spring flowers over the Korean Peninsula. Gridded temperature and phenology data were initially prepared in ArcGIS GRID format and converted to image files (.png), which can be loaded as new layers in Google Earth. We used a high-resolution LCD monitor with a 2,560 by 1,600 resolution driven by a dual-link DVI card to enhance the visual effect during the demonstration.
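
The KML mechanism the authors describe amounts to a GroundOverlay that drapes a pre-rendered PNG over a geographic bounding box. The following Python sketch writes such a file; the bounding-box coordinates and file names are illustrative placeholders, not values from the study.

```python
# Minimal sketch of the KML pattern described above: a GroundOverlay that
# drapes a pre-rendered PNG (e.g., a gridded temperature map) over the
# Korean Peninsula. Coordinates and file names are illustrative only.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>Projected mean temperature (illustrative layer)</name>
    <Icon><href>temperature_2011_2040.png</href></Icon>
    <LatLonBox>
      <north>43.0</north><south>33.0</south>
      <east>132.0</east><west>124.0</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""

with open("temperature_overlay.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```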

A Study on the Effects of Overseas Direct Purchase Content Attributes and Logistics Attributes on Consumer's Perceived Value and Purchase Intention (해외직구 콘텐츠 속성과 물류 속성이 소비자의 지각된 가치 및 구매의도에 미치는 영향)

  • Park, Soo-Bin;Hyun, Jung-Hwan
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.6
    • /
    • pp.666-679
    • /
    • 2022
  • With the development of information and communication technology, borders are disappearing and consumer needs are diversifying, so a new form of consumption called overseas direct purchase has emerged and grown over the past few years. This study empirically examined, for the purchasing agency service that is the most common type of overseas direct purchase, the effects of the content and logistics attributes of overseas direct purchase platforms on consumers' perceived value and purchase intention. Data were collected from 273 Korean adult consumers, male and female, who had experience searching for information related to overseas direct purchase, and the statistical results are summarized as follows. First, among the content and logistics attributes of overseas direct purchase, only the attractiveness of the content had a significant effect on consumers' perceived value. Second, all dimensions of consumers' perceived value were linked to purchase intention, with sensory value having the greatest influence. Based on these results, it is suggested that overseas direct purchase agents increase the attractiveness of their web/app content and improve the quality of services that evoke sensory consumption experiences in order to meet the changing needs of consumers.

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.171-193
    • /
    • 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the Semantic Web. We suggest a new method that provides identity links in the LOD cloud conveniently and allows changes in an LOD to be reflected in searching results without omission. An LOD publishes detailed descriptions of entities as RDF triples; an RDF triple is composed of a subject, a predicate, and an object and presents a detailed description of an entity. Links in the LOD cloud, called identity links, are realized by asserting that entities in different LODs are identical. Currently, an identity link is provided by explicitly creating a link triple whose subject and object are associated with the source and target entities, and appending the link triple to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs; providing this opportunity for knowledge expansion is the goal of the LOD cloud. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, which is very difficult given the enormous scale of LOD, and newly added entities cannot be reflected in searching results until identity links pointing to them are serialized and published to the LOD cloud. Instead of creating an enormous number of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. On searching, it then becomes possible to access newly added entities and reflect them in searching results without omission by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs; for the link policy specification, we suggest a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the objects of the source and target entities that are associated with the predicate pair in the link policy. We implemented a system, the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's searching request starts from the depth_0 LOD, i.e., surface searching; referencing the link policies of the LODs, CAIDS proceeds with in-depth searching into the LODs at the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well, and its in-depth searching follows the identity links. The content of an entity obtained from the depth_0 LOD is expanded with the contents of entities in other LODs that have been discovered to be identical to it; expanding the content of the depth_0 entity without the user having to be aware of those other LODs is the implementation of knowledge expansion, which is the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion; we suggest a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides an appropriate expansion ratio and inclusion ratio as long as the degree of similarity between source and target objects is 0.8 to 0.9. The expansion ratio at each depth is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD, and the inclusion ratio at each depth is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies.
With similarity degrees below 0.8, expansion becomes excessive and contents become distorted; a similarity degree of 0.8 to 0.9 also yields an appropriate number of searched RDF triples. The experiments also evaluated the confidence degree of the contents expanded by in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which means the degree of identity to the entity of the depth_0 LOD; the identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links coming into the entities of that LOD. In evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links point to a common entity, was also considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as the depth deepens but rebounds as the depth deepens further; for each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than eight identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method proposed here is expected to contribute to the abundant provision of identity links to the LOD cloud.
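
The identity check behind a link policy compares the objects associated with a specified predicate pair and accepts two entities as identical when the similarity reaches roughly 0.8 to 0.9. The sketch below is a simplified, hypothetical illustration of that comparison using difflib string similarity; it is not the CAIDS implementation, and the entities, predicates, and threshold handling are invented for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical descriptions of one entity in a source LOD and a candidate
# entity in a target LOD, keyed by predicate (all values are invented).
source_entity = {"rdfs:label": "Gyeongbokgung Palace", "dbo:location": "Seoul"}
target_entity = {"rdfs:label": "Gyeongbokgung", "dbo:locatedIn": "Seoul"}

# A link policy here is a list of (source predicate, target predicate)
# pairs whose objects are compared when judging identity.
link_policy = [("rdfs:label", "rdfs:label"), ("dbo:location", "dbo:locatedIn")]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def identical(src, tgt, policy, threshold=0.8):
    scores = [similarity(src[ps], tgt[pt])
              for ps, pt in policy if ps in src and pt in tgt]
    return bool(scores) and sum(scores) / len(scores) >= threshold

print(identical(source_entity, target_entity, link_policy))  # True at threshold 0.8
```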

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services
    • /
    • v.19 no.3
    • /
    • pp.67-77
    • /
    • 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud in order to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented the Policy based In-depth Searching and Cleansing (PISC) system, which proceeds with in-depth searching across LODs by referencing the link policies; PISC has been published on GitHub. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities, confining the results to those that exceed the user's criterion for the entity identity level. As searching results, PISC provides an entity's detailed contents collected from diverse LODs together with an ontology customized to those contents. A simulation of PISC was performed on five DBpedia LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio of the searching results, and that three or more target LODs should be specified in the link policy to obtain sufficient identity of the searched entities.