• Title/Summary/Keyword: Big Data Techniques (빅데이터 기법)

Search Results: 780

Development of Real-time Rainfall Sensor Rainfall Estimation Technique using Optima Rainfall Intensity Technique (Optima Rainfall Intensity 기법을 이용한 실시간 강우센서 강우 산정기법 개발)

  • Lee, Byung Hun; Hwang, Sung Jin; Kim, Byung Sik
    • Proceedings of the Korea Water Resources Association Conference / 2019.05a / pp.429-429 / 2019
  • In recent years, localized and intense heavy rainfall has occurred frequently due to various environmental factors such as abnormal climate, and traffic congestion and road disasters have become serious social problems. Solving these problems requires research on real-time, short-term mobile rainfall information technology and on ways to utilize road weather information. This study aimed to develop a technique for producing rainfall information using the rain sensor installed in vehicles for the AW (AutoWiping) function. The rain sensor consists of four channels and collects 250 optical-signal data points per second, producing about 3.6 million data points per hour. Indoor experiments reproducing five levels of artificial rainfall were conducted, and the correlation between rain-sensor data and rainfall was defined as a W-S-R relationship. A Threshold Map method was developed to compute cumulative values for outdoor data, whose external environment and data-generation conditions differ from those of the indoor experiments. To produce accurate rainfall information in real time from the large volume of data generated by the rain sensor, big data processing techniques were used to average the thresholds of the indoor data by rainfall intensity and channel, generating a 4×5 Threshold Map (4 = channels, 5 = rainfall events). To select a big data processing technique suitable for rain-sensor-based rainfall estimation, Gradient Descent and Optima Rainfall Intensity were applied, and the results were verified against ground-observed rainfall. The results confirmed the suitability of Optima Rainfall Intensity, and rain-sensor rainfall was produced for eight rainfall events observed in real time.
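The 4×5 Threshold Map averaging described in this abstract can be sketched as follows. This is an illustrative reconstruction: the sample threshold values and function name are assumptions, and only the 4-channel × 5-level shape comes from the abstract.

```python
from statistics import mean

def build_threshold_map(samples):
    """Average per-cell thresholds from indoor-experiment samples.

    samples: list of (channel, intensity_level, threshold) tuples with
    channel in 0..3 and intensity_level in 0..4. Returns a 4x5 grid of
    per-cell mean thresholds (None where no sample exists).
    """
    cells = {}
    for ch, lvl, th in samples:
        cells.setdefault((ch, lvl), []).append(th)
    return [[mean(cells[(ch, lvl)]) if (ch, lvl) in cells else None
             for lvl in range(5)]
            for ch in range(4)]

# Made-up indoor measurements: two readings for channel 0 at level 0,
# one reading for channel 3 at level 4.
samples = [(0, 0, 10.0), (0, 0, 12.0), (3, 4, 80.0)]
tmap = build_threshold_map(samples)
print(tmap[0][0], tmap[3][4])
```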


Big Data Convergence-Based Marketing Strategy for Consumers' Airline Preferences (소비자의 항공사 선호도에 대한 빅데이터 융합 기반 마케팅 전략)

  • Chun, Yong-Ho; Lee, Seung-Joon; Park, Su-Hyeon
    • Journal of Industrial Convergence / v.17 no.3 / pp.17-22 / 2019
  • As the value of big data has come to be recognized, governments, public institutions, and private businesses can improve decision making by developing and utilizing JAVA and R programs capable of analyzing vast amounts of structured and unstructured data. In this study, news data were collected and analyzed through text-mining techniques in order to establish marketing strategies based on consumers' airline preferences. This research is meaningful in that it establishes marketing strategies from the analysis of consumers' airline preferences, applying advanced big data techniques to data that were difficult to obtain in the past.
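A minimal sketch of the kind of text mining the abstract describes: counting airline mentions in news text as a crude preference signal. The headlines, airline list, and counting approach are illustrative assumptions, not the study's data or method.

```python
from collections import Counter
import re

# Toy corpus of news headlines (illustrative only).
headlines = [
    "Korean Air expands long-haul routes",
    "Asiana delays flight amid weather",
    "Korean Air posts record quarter",
]
airlines = ["Korean Air", "Asiana"]

# Count how often each airline name appears across the corpus.
mentions = Counter()
for h in headlines:
    for a in airlines:
        mentions[a] += len(re.findall(re.escape(a), h))

print(mentions.most_common(1)[0])
```

A real study would add tokenization, stop-word removal, and sentiment scoring on top of raw mention counts.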

Real Time Stock Information Analysis Method Based on Big Data considering Reliability (신뢰성을 고려한 빅데이터 기반 실시간 증권정보 분석 기법)

  • Kim, Yoon-Ki; Cho, Chang-Woo; Jeong, Chang-Sung
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.146-147 / 2013
  • With the spread of social media and smartphones, the amount of information exchanged between users on the Internet has increased dramatically, raising the need to process large-scale data. Such big data originates from various distributed servers: news sites, social media, websites, and so on. Analyzing stock information likewise requires fetching data from multiple distributed servers, including real-time trading volume and prices as well as the disclosure information of listed companies. Existing big data analysis techniques assume that data fetched from each distributed server has the same reliability, which limits their ability to analyze data containing indiscriminate information effectively. In this paper, we assign reliability weights to the fetched data to enable reliable analysis of stock information.
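The reliability-weighting idea can be sketched as a weighted mean over per-source scores. The source types, scores, and weights below are illustrative assumptions, not values from the paper.

```python
def weighted_sentiment(observations):
    """Combine per-source scores using reliability weights.

    observations: list of (score, reliability_weight) tuples.
    Returns the reliability-weighted mean score.
    """
    total_weight = sum(w for _, w in observations)
    if total_weight == 0:
        raise ValueError("all sources have zero reliability")
    return sum(s * w for s, w in observations) / total_weight

# Example: an official disclosure (high reliability) vs. social media chatter.
obs = [
    (0.8, 1.0),   # company disclosure server
    (-0.2, 0.3),  # social media feed
    (0.5, 0.6),   # news site
]
print(round(weighted_sentiment(obs), 3))
```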

Big Data Processing Scheme in a Distributed Environment (분산환경에서 빅 데이터 처리 기법)

  • Jeong, Yoon-Su; Han, Kun-Hee
    • Journal of Digital Convergence / v.12 no.6 / pp.311-316 / 2014
  • With the popularity of smartphones, the data stored on social network servers and the services that access this data are increasing. Big data processing technology is one of the most important technologies in big data services, but security solutions for it remain immature. In this paper, we propose a hash-chain-based data processing technique that applies a double hash to large distributed data, so that users of big data services can easily access data distributed across multiple locations. The proposed technique ties the kinds, functions, and characteristics of big data to a hash chain, supporting high data throughput. Furthermore, it performs access control for distributed big data processing by linking data attribute information to connection information through the hash chain, addressing security vulnerabilities such as eavesdropping on tokens and data nodes.
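One plausible reading of the double-hash chain, sketched under the assumption that each record is hashed twice and then bound to the previous link; record contents and function names are illustrative, not the paper's design.

```python
import hashlib

def chain_link(prev_digest: bytes, record: bytes) -> bytes:
    """Double-hash a record and bind it to the previous chain link."""
    record_digest = hashlib.sha256(hashlib.sha256(record).digest()).digest()
    return hashlib.sha256(prev_digest + record_digest).digest()

def build_chain(records):
    """Fold records into a hash chain, returning every intermediate link."""
    digest = b"\x00" * 32  # genesis link
    links = []
    for rec in records:
        digest = chain_link(digest, rec)
        links.append(digest)
    return links

links = build_chain([b"node=A;attr=location", b"node=B;attr=owner"])
# Tampering with an earlier record changes every later link,
# which is what makes the chain usable for access-control checks.
tampered = build_chain([b"node=A;attr=LOCATION", b"node=B;attr=owner"])
print(links[-1] != tampered[-1])
```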

A Study on the Strategy of the Use of Big Data for Cost Estimating in Construction Management Firms based on the SWOT Analysis (SWOT분석을 통한 CM사 견적업무 빅데이터 활용전략에 관한 연구)

  • Kim, Hyeon Jin; Kim, Han Soo
    • Korean Journal of Construction Engineering and Management / v.23 no.2 / pp.54-64 / 2022
  • Since interest in big data is growing exponentially, various types of research and development in the field of big data have been conducted in the construction industry. Among the various application areas, cost estimating is one where the use of big data can provide positive benefits. In order for firms to make efficient use of big data for estimating tasks, they need to establish a strategy based on a multifaceted analysis of internal and external environments. The objective of this study is to develop and propose a strategy for the use of big data in construction management (CM) firms' cost estimating tasks, based on a SWOT analysis. Through a combination of literature review, questionnaire survey, interviews, and the SWOT analysis, the study suggests that CM firms need to maintain their current level of receptive culture for the use of big data and incrementally expand information resources. It also proposes that they reinforce weak areas, including big data experts and practice infrastructure, to improve big data-based cost estimating.

Multi-Attribute-Based Data Management Scheme in a Big Data Environment (빅 데이터 환경에서 다중 속성 기반의 데이터 관리 기법)

  • Jeong, Yoon-Su; Kim, Yong-Tae; Park, Gil-Cheol
    • Journal of Digital Convergence / v.13 no.1 / pp.263-268 / 2015
  • With the development of IT technology, ubiquitous information technologies that correlate information through object-based sensors and mobile networks have advanced. However, security solutions for data stored on servers satisfy only minimal conditions. In this paper, we propose a data management technique that applies a hash chain over multiple attributes of the data used in big data services, so that the large volumes of data provided by these services can be handled safely. The proposed technique classifies data attribute information by the type of data used in big data services and ties the classification to a hash chain according to the data's functions and characteristics, improving the safety of the data. In addition, by using the access-control information of the hash chain to link data attribute information, geographically dispersed data can be accessed easily in distributed big data processing.

In-Memory Based Incremental Processing Method for Stream Query Processing in Big Data Environments (빅데이터 환경에서 스트림 질의 처리를 위한 인메모리 기반 점진적 처리 기법)

  • Bok, Kyoungsoo; Yook, Misun; Noh, Yeonwoo; Han, Jieun; Kim, Yeonwoo; Lim, Jongtae; Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.16 no.2 / pp.163-173 / 2016
  • Recently, distributed processing of massive amounts of stream data has been actively studied. In this paper, we propose an in-memory-based incremental stream data processing method for big data environments. The proposed method stores input data in a temporary queue and compares them with the data registered in a master node. If the data exists in the master node, the method reuses the previous processing results located in the node designated by the master node. If no previous results exist, the method processes the data and stores the result in a separate node. We also propose a job scheduling technique that considers the load and performance of each node. To show the superiority of the proposed method, we compare it with an existing method in terms of query processing time; our experimental results show that our method outperforms the existing one.
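The reuse mechanism can be sketched in plain Python: a master index remembers which node holds each previously computed result, so repeated stream items skip recomputation. The class name, two-node layout, and squaring workload are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

class IncrementalProcessor:
    def __init__(self, compute):
        self.compute = compute
        self.master_index = {}       # input -> node holding its cached result
        self.nodes = {0: {}, 1: {}}  # two worker nodes' result stores
        self.queue = deque()         # temporary queue for incoming stream data

    def submit(self, item):
        self.queue.append(item)

    def process_all(self):
        results = []
        while self.queue:
            item = self.queue.popleft()
            if item in self.master_index:
                # Previous result exists: reuse it from the node the
                # master index points to, skipping recomputation.
                node = self.master_index[item]
                results.append(self.nodes[node][item])
            else:
                # First sighting: compute, store on a node, register it.
                node = hash(item) % len(self.nodes)
                value = self.compute(item)
                self.nodes[node][item] = value
                self.master_index[item] = node
                results.append(value)
        return results

proc = IncrementalProcessor(lambda x: x * x)
for v in [3, 5, 3, 7]:
    proc.submit(v)
out = proc.process_all()  # the repeated 3 is served from cache
print(out)
```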

Interoperability between NoSQL and RDBMS via Auto-mapping Scheme in Distributed Parallel Processing Environment (분산병렬처리 환경에서 오토매핑 기법을 통한 NoSQL과 RDBMS와의 연동)

  • Kim, Hee Sung; Lee, Bong Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.11 / pp.2067-2075 / 2017
  • Lately, big data processing is considered an emerging issue. As a huge amount of data is generated, data processing capability is becoming important. In processing big data, both the Hadoop distributed file system and NoSQL data stores for unstructured data processing are getting a lot of attention. However, there are still problems and inconveniences in using NoSQL. For low-volume data, MapReduce in NoSQL normally consumes unnecessary processing time and requires relatively more data retrieval time than an RDBMS. In order to address this NoSQL problem, this paper proposes an interworking scheme between NoSQL and a conventional RDBMS. The developed auto-mapping scheme chooses the appropriate database (NoSQL or RDBMS) depending on the amount of data, which results in fast search times. Experimental results for a specific data set show that the database interworking scheme reduces data search time by up to 35%.
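The auto-mapping decision can be sketched as a simple volume-based router; the row-count threshold and backend labels are assumptions for illustration, not the paper's actual mapping rule.

```python
# Assumed cutoff between RDBMS and NoSQL; a real system would tune this
# from measured query latencies rather than fixing it a priori.
SIZE_THRESHOLD = 100_000  # rows

def choose_backend(row_count: int) -> str:
    """Pick the storage backend expected to answer fastest.

    Small data sets avoid MapReduce overhead by going to the RDBMS;
    large ones go to NoSQL for distributed processing.
    """
    return "rdbms" if row_count < SIZE_THRESHOLD else "nosql"

print(choose_backend(5_000))      # small data set
print(choose_backend(2_000_000))  # big data set
```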

Analysis of the Effectiveness of Big Data-Based Six Sigma Methodology: Focus on DX SS (빅데이터 기반 6시그마 방법론의 유효성 분석: DX SS를 중심으로)

  • Kim Jung Hyuk; Kim Yoon Ki
    • KIPS Transactions on Software and Data Engineering / v.13 no.1 / pp.1-16 / 2024
  • Over recent years, Six Sigma has become a key methodology in manufacturing for quality improvement and cost reduction. However, challenges have arisen due to the difficulty of analyzing the large-scale data generated by smart factories and the rigid, formal way the methodology is traditionally applied. To address these limitations, a big data-based Six Sigma approach has been developed, integrating the strengths of Six Sigma and big data analysis, including statistical verification, mathematical optimization, interpretability, and machine learning. Despite its potential, the practical impact of big data-based Six Sigma on manufacturing processes and management performance has not been adequately verified, leading to limited reliability and underutilization in practice. This study investigates the efficiency impact of DX SS, a big data-based Six Sigma methodology, on manufacturing processes, and identifies key success policies for its effective introduction and implementation in enterprises. Drawing on cases where implementation failed due to incorrect policies, the study highlights the importance of involving all executives and employees and of researching key success policies. This research aims to help manufacturing companies achieve successful outcomes by actively adopting and utilizing the methodologies presented.

Design of Efficient Big Data Collection Method based on Mass IoT devices (방대한 IoT 장치 기반 환경에서 효율적인 빅데이터 수집 기법 설계)

  • Choi, Jongseok; Shin, Yongtae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.4 / pp.300-306 / 2021
  • With the development of IT technology, hardware for IoT equipment has recently advanced, and smart systems using low-cost, high-performance RF and computing devices are being developed. However, in infrastructure environments where a large number of IoT devices are installed, big data collection places a load on the collection server due to bottlenecks among the transmitted data, causing packet loss and reduced data throughput at the server. An efficient big data collection technique is therefore needed for such environments, and in this paper we propose one. The performance evaluation showed that the proposed technique transmitted files completely, without packet loss or reduced data throughput. In the future, the system needs to be implemented based on this design.