• Title/Summary/Keyword: Big Y development

1,583 search results

A Study on the Analysis of Regional Tourism in Uijeongbu Using Big Data (빅 데이터를 활용한 의정부 지역 관광 분석 연구)

  • Lee, Jong-Yong;Jung, Kye-Dong;Ryu, Ki-hwan;Park, SeaYoung
    • The Journal of the Convergence on Culture Technology / v.6 no.1 / pp.413-418 / 2020
  • To improve the quality of tourist-course development, this study collects and analyzes tourists' travel patterns from big data held by telecom carriers, credit-card companies, and other sources, including the routes tourists take and how long they stay. The analyzed data are used to derive empirical evidence for estimating the effect of tourist inflow, and serve as basic data for characterizing existing tourism courses and for developing new ones in the future.
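As an illustration of the kind of route analysis the abstract describes, the sketch below counts stop-to-stop transitions in trip records to surface dominant flows; the place names and records are invented for illustration, not data from the study.

```python
from collections import Counter

# Hypothetical trip records: ordered stop sequences such as might be
# reconstructed from carrier base-station logs (illustrative only).
trips = [
    ("Uijeongbu Station", "Buramsan", "Jeongol Market"),
    ("Uijeongbu Station", "Jeongol Market"),
    ("Uijeongbu Station", "Buramsan", "Jeongol Market"),
]

# Count consecutive stop-to-stop transitions across all trips.
transitions = Counter(
    (a, b) for trip in trips for a, b in zip(trip, trip[1:])
)

# The most frequent transitions suggest segments for a tourist course.
top_segment, count = transitions.most_common(1)[0]
```

A real pipeline would aggregate millions of such records and join in dwell times, but the counting step has this shape.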

A Context-Awareness Modeling User Profile Construction Method for Personalized Information Retrieval System

  • Kim, Jee Hyun;Gao, Qian;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.2 / pp.122-129 / 2014
  • Effective information gathering and retrieval of the most relevant web documents on the topic of interest is difficult due to the large amount of information that exists in various formats. Current information gathering and retrieval techniques are unable to exploit semantic knowledge within documents in the "big data" environment; therefore, they cannot provide precise answers to specific questions. Existing commercial big data analytic platforms are restricted to a single data type; moreover, different big data analytic platforms are effective at processing different data types. Therefore, the development of a common big data platform that is suitable for efficiently processing various data types is needed. Furthermore, users often possess more than one intelligent device. It is therefore important to find an efficient preference profile construction approach to record the user context and personalized applications. In this way, user needs can be tailored according to the user's dynamic interests by tracking all devices owned by the user.
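The multi-device preference profile the abstract calls for can be sketched as follows, assuming each device contributes a simple topic-to-count interaction log; the device names, topics, and counts are illustrative assumptions, not the paper's method.

```python
from collections import defaultdict

# Hypothetical per-device interaction logs (topic -> view count).
device_logs = {
    "phone":  {"travel": 5, "music": 2},
    "tablet": {"travel": 1, "cooking": 4},
}

# Merge logs from every device the user owns into one profile.
merged = defaultdict(int)
for log in device_logs.values():
    for topic, n in log.items():
        merged[topic] += n

# Normalise counts into preference weights that sum to 1.
total = sum(merged.values())
profile = {topic: n / total for topic, n in merged.items()}
```

Tracking all devices this way lets the weights shift as the user's interests drift, which is the dynamic-profile behaviour the abstract argues for.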

Study on Decision-Making Factors of Big Data Application in Enterprises: Using Company S as an Example

  • Huang, Yun Kuei;Yang, Wen I.;Chan, Ching Sen
    • East Asian Journal of Business Economics (EAJBE) / v.4 no.1 / pp.5-15 / 2016
  • With the vigorous development of global online communities, smartphones, and mobile devices, enterprises can rapidly collect various kinds of data from internal and external environments. How to discover valuable information in rapidly growing big data and transform it into new business opportunities is an extremely important issue for today's enterprises. This study treats Company S as its subject and identifies the factors of big data application in enterprises using a modified Decision Making Trial and Evaluation Laboratory (DEMATEL) method and a perceived benefits-perceived barriers relation matrix, offering a reference for the big data application and management of managers and marketing personnel in other organizations and related industries.
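The core DEMATEL computation is standard: normalise the direct-influence matrix and derive the total-relation matrix T = N(I - N)^-1. A minimal two-factor sketch, with invented influence scores rather than Company S data:

```python
# Direct-influence matrix A for two hypothetical factors
# (scores are illustrative, not the study's survey data).
A = [[0.0, 3.0],
     [2.0, 0.0]]

# Normalise by the largest row sum.
s = max(sum(row) for row in A)
N = [[a / s for a in row] for row in A]

# Total-relation matrix T = N (I - N)^{-1}, inverted by hand for 2x2.
a, b = 1.0 - N[0][0], -N[0][1]
c, d = -N[1][0], 1.0 - N[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
T = [[sum(N[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

# Row sum (influence given) + column sum (influence received)
# gives each factor's prominence in the system.
prominence = [sum(T[i]) + sum(T[k][i] for k in range(2)) for i in range(2)]
```

Larger factor sets need a general matrix inverse, but the normalisation and total-relation steps are the same.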

Development of the Unified Database Design Methodology for Big Data Applications - based on MongoDB -

  • Lee, Junho;Joo, Kyungsoo
    • Journal of the Korea Society of Computer and Information / v.23 no.3 / pp.41-48 / 2018
  • The recent explosion of big data is characterized by continuous data generation, large volume, and unstructured formats. Existing relational database technologies are inadequate for such big data due to their limited processing speed and the significant cost of storage expansion; currently implemented solutions are mainly based on relational databases, which are no longer adapted to these data volumes. NoSQL solutions allow new approaches to data warehousing, especially from the multidimensional data-management point of view. In this paper, we develop and propose an integrated design methodology based on MongoDB for big data applications. The proposed methodology is more scalable than the existing one, making big data easier to handle.
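A central move in any MongoDB-oriented design methodology is denormalisation: folding a relational one-to-many pair into a single embedded document. The sketch below shows that step on plain dictionaries; the collection and field names are illustrative assumptions, not the paper's case study.

```python
# Relational-style rows: an order table and its line-item table.
orders = [{"order_id": 1, "customer": "Kim"}]
items = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 1, "sku": "B-200", "qty": 1},
]

def embed(orders, items):
    """Denormalise: nest each order's items inside the order document,
    dropping the now-redundant foreign key."""
    docs = []
    for o in orders:
        doc = dict(o)
        doc["items"] = [
            {k: v for k, v in it.items() if k != "order_id"}
            for it in items if it["order_id"] == o["order_id"]
        ]
        docs.append(doc)
    return docs

docs = embed(orders, items)  # ready for collection.insert_many(docs)
```

Embedding trades update flexibility for read locality, which is why a methodology, rather than ad-hoc conversion, is needed to decide when to embed and when to reference.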

Development of the design methodology for large-scale database based on MongoDB

  • Lee, Jun-Ho;Joo, Kyung-Soo
    • Journal of the Korea Society of Computer and Information / v.22 no.11 / pp.57-63 / 2017
  • The recent explosion of big data is characterized by continuous data generation, large volume, and unstructured formats. Existing relational database technologies are inadequate for such big data due to their limited processing speed and the significant cost of storage expansion. Thus, big data processing technologies, normally based on distributed file systems, distributed database management, and parallel processing, have arisen as core technologies for implementing big data repositories. In this paper, we propose a design methodology for large-scale databases based on MongoDB, extending the information engineering methodology based on the E-R data model.

A Study on Exploring Factors Having Influenced on Silver Industry to Activate Senior Start-up : Using Big-Data (실버산업의 영향요인 탐색을 통한 시니어창업 활성화: 빅데이터(BIgData) 분석)

  • Park, Sang Kyu;Kang, Man Su;Son, Hee Young;Cho, Sung Hyun
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.11 no.6 / pp.185-194 / 2016
  • Recently, with the popularization of mobile devices and the internet, the need for big data technology that exploits the information contained in vast amounts of data has emerged. Big data technology has been used in various fields, but its use in the public sector is still insufficient, so this study applies it there. Through keyword analysis, the study explores factors influencing the silver industry, which affects present as well as future society. Five variables were retrieved: 'silver industry', 'senior citizens living alone', 'aging', 'birth rate', and 'retirement', and they were confirmed to be correlated with one another. Analysis of the influence of the other four variables on 'silver industry' showed that each has a significant effect. In addition, the study proposes 'providing living space for senior citizens living alone', 'childbirth support policies', and 'support for silver start-ups by senior technical manpower' as alternatives for developing the silver industry. The theoretical implication is the exploration of factors through a quantitative approach using big data; the practical implication lies in the alternatives suggested.
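The correlation check between keyword variables can be sketched as a Pearson correlation between search-volume series; the two series below are invented monthly figures for illustration, not the study's data.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly search volumes for two of the five keywords.
silver = [10, 12, 15, 18, 22]      # 'silver industry'
retire = [20, 23, 29, 35, 44]      # 'retirement'

r = pearson(silver, retire)        # close to +1: strongly correlated
```

The study would run such pairwise correlations (and then regressions) across all five keyword series.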


New Medical Image Fusion Approach with Coding Based on SCD in Wireless Sensor Network

  • Zhang, De-gan;Wang, Xiang;Song, Xiao-dong
    • Journal of Electrical Engineering and Technology / v.10 no.6 / pp.2384-2392 / 2015
  • The technical development and practical application of big data for health is a hot topic under the banner of big data, and big-data medical image fusion is one of its key problems. This paper proposes a new fusion approach with coding based on the Spherical Coordinate Domain (SCD) in a Wireless Sensor Network (WSN) for big-data medical images. In this approach, the three high-frequency coefficients of the medical image in the wavelet domain are pre-processed, a strategy that reduces the redundancy of big-data medical images. First, the high-frequency coefficients are transformed to the spherical coordinate domain to reduce correlation within the same scale. Then, a multi-scale model product (MSMP) is used to control the shrinkage function so that small wavelet coefficients and some noise are removed. The high-frequency parts in the spherical coordinate domain are coded by an improved SPIHT algorithm. Finally, based on the multi-scale edges of the medical image, the image is fused and reconstructed. Experimental results indicate that the novel approach is effective and very useful for the transmission of big-data medical images, especially in wireless environments.
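The spherical-coordinate step can be illustrated in miniature: the three high-frequency subband coefficients at one position are treated as a 3-D point and mapped to (r, θ, φ), concentrating energy in the radius. This is a reading of the transform the abstract names, with invented coefficient values; the paper's full pipeline (MSMP shrinkage, SPIHT coding) is not reproduced here.

```python
import math

def to_spherical(hl, lh, hh):
    """Map one (HL, LH, HH) wavelet-coefficient triple to (r, theta, phi)."""
    r = math.sqrt(hl * hl + lh * lh + hh * hh)
    theta = math.acos(hh / r) if r else 0.0
    phi = math.atan2(lh, hl)
    return r, theta, phi

def to_cartesian(r, theta, phi):
    """Inverse map, used after decoding to restore the coefficients."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# Illustrative coefficient triple at one pixel position.
r, theta, phi = to_spherical(3.0, 4.0, 12.0)
```

Because correlated subband triples yield slowly varying angles, the angular components compress well, which is the decorrelation benefit the abstract claims.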

Hadoop Based Wavelet Histogram for Big Data in Cloud

  • Kim, Jeong-Joon
    • Journal of Information Processing Systems / v.13 no.4 / pp.668-676 / 2017
  • Recently, the importance of big data has been emphasized with the development of smartphones and the web/SNS. As a result, MapReduce, which can process big data efficiently, is receiving worldwide attention for its excellent scalability and stability. Since big data has large volume, fast generation speed, and varied properties, it is more efficient to process big data summary information than the big data itself. The wavelet histogram, a typical data-summary technique, can generate optimal summary information with little loss from the original data, so systems applying MapReduce-based wavelet histogram generation have been actively studied. However, existing work generates the wavelet histogram through one or more MapReduce jobs, which slows generation, and the error of the data restored from the histogram is likely to be large. The MapReduce-based wavelet histogram generation system developed in this paper builds the histogram in a single MapReduce job, greatly increasing generation speed; in addition, since the histogram is generated to meet an error bound specified by the user, the error of the restored data can be controlled. Finally, we verified the efficiency of the system through performance evaluation.

Study on Educational Utilization Methods of Big Data (빅데이터의 교육적 활용 방안 연구)

  • Lee, Youngseok;Cho, Jungwon
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.12 / pp.716-722 / 2016
  • In today's rapidly changing IT environment, the amount of smart digital data is growing exponentially, and big data research, development services, and related technologies are becoming widespread in many areas. In smart learning, big data holds potential for many stakeholders, including students, teachers, and parents. In this paper, we describe big data, identify scenarios in which it can be utilized, and propose customized learning services that take advantage of it. To analyze educational big data, we designed a big data processing system and the measures needed to put educational big data to use. These measures were implemented on a cloud-based test platform running a pilot training program. Teachers used the platform directly, and a survey was conducted on enjoyment, the tools, and users' feelings (e.g., tense, worried, confident). We analyzed the results to lay the groundwork for the educational use of big data.

The Creation and Placement of VMs and Tasks in Virtualized Hadoop Cluster Environments

  • Kim, Tae-Won;Chung, Hae-jin;Kim, Joon-Mo
    • Journal of Korea Multimedia Society / v.15 no.12 / pp.1499-1505 / 2012
  • Recently, distributed processing systems for big data have been actively investigated owing to the development of high-speed network and storage technologies. In addition, virtualized systems that make efficient use of resources through server consolidation have gained increasing recognition. However, many problems occur when a distributed processing system for big data is configured in a virtual machine environment. In this paper, we experiment with optimizing I/O bandwidth according to the creation and placement of VMs and tasks in a Hadoop cluster composed in a virtual environment, and we evaluate the results. These results will inform the development of a Hadoop scheduler that supports I/O bandwidth balancing in virtual environments.