• Title/Summary/Keyword: Big6


Current Status and Improvement of the Fast Imaging Solar Spectrograph of the 1.6m telescope at Big Bear Solar Observatory

  • Park, Hyungmin;Chae, Jongchul;Song, Donguk;Yang, Heesu;Jang, Bi-Ho;Park, Young-Deuk;Nah, Jakyoung;Cho, Kyung-Suk;Ahn, Kwangsu
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.37 no.2
    • /
    • pp.112.2-112.2
    • /
    • 2012
  • For the study of fine-scale structure and dynamics in the solar chromosphere, the Fast Imaging Solar Spectrograph (FISS) was installed on the 1.6 m New Solar Telescope at Big Bear Solar Observatory in 2010. The instrument, mounted on a vertical table in the Coude lab, is working properly and producing data for science. From the analysis of the data, however, we noticed a couple of problems that degrade image quality: a lower light level and poorer resolution in the Ca II band data. After several tests, we found that placing the relay optics at the correct position is crucial for the spatial resolution of raster-scan images. Using a resolution target, we re-aligned the relay optics and other components of the spectrograph. Here we present the results of the optical tests and new data taken with the FISS.


Structuring of unstructured big data and visual interpretation (부산지역 교통관련 기사를 이용한 비정형 빅데이터의 정형화와 시각적 해석)

  • Lee, Kyeongjun;Noh, Yunhwan;Yoon, Sanggyeong;Cho, Youngseuk
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.6
    • /
    • pp.1431-1438
    • /
    • 2014
  • We analyzed articles from the "Kukje Shinmun" and "Busan Ilbo", two local newspapers of Busan Metropolitan City, covering January 1, 2013 to December 31, 2013. Meaningful patterns inherent in the 2,889 articles whose titles include both "Busan" and "traffic", together with related data, were analyzed. Text mining, a branch of data mining, was used for social network analysis (SNA). HDFS and MapReduce from the Hadoop ecosystem, an open-source Java-based framework, were used in a Linux environment (Ubuntu 12.04 LTS) to structure the unstructured data and to store, process, and analyze the big data. We implemented a new algorithm that produces a better visualization than the default one in the R package by setting the color and thickness of each node and connecting line according to its weight.
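
The visualization step described above (node and line color/thickness scaled by weight) was implemented in R by the authors; as a rough sketch of the same idea in Python with networkx and matplotlib (the graph data here is illustrative, not taken from the articles):

```python
import matplotlib.pyplot as plt
import networkx as nx

# Illustrative keyword network; weights stand in for co-occurrence counts from the articles.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Busan", "traffic", 30),
    ("traffic", "accident", 18),
    ("traffic", "congestion", 12),
    ("Busan", "subway", 8),
])

pos = nx.spring_layout(G, seed=42)
edge_weights = [G[u][v]["weight"] for u, v in G.edges]

# Node size follows weighted degree; edge width and color follow edge weight.
node_size = [100 * G.degree(n, weight="weight") for n in G.nodes]
nx.draw_networkx_nodes(G, pos, node_size=node_size, node_color="skyblue")
nx.draw_networkx_edges(G, pos, width=[w / 5 for w in edge_weights],
                       edge_color=edge_weights, edge_cmap=plt.cm.Blues)
nx.draw_networkx_labels(G, pos, font_size=9)
plt.axis("off")
plt.show()
```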

A Meta-Analysis of the Influence of Collagen Intake on Skin Utilizing Big Data (빅데이터 분석을 활용한 콜라겐 섭취가 피부에 미치는 영향에 관한 메타분석)

  • Jin, Chan-Yong;Yu, Ok-Kyeong;Nam, Soo-Tai
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.11
    • /
    • pp.2033-2038
    • /
    • 2016
  • Big data analysis refers to the process of discovering meaningful new correlations, patterns, and trends in the large amounts of data stored in data warehouses, and of creating new value from them. The important issue in a meta-analysis is not the significance test but the effect size of the predictor variable on the criterion variable. We reviewed a total of 236 samples from 6 studies on collagen intake and skin published in Korea between 2000 and 2016. The results of the study are summarized as follows. First, the before-and-after path for sebum (SB) had the largest effect size (r = .416); the collagen intake intervention therefore showed an explanatory power of about 17%. Next, the before-and-after path for moisture (MS) had the next-largest effect size (r = .318). Thus, we present the theoretical and practical implications of these results.
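
The ~17% explanatory power quoted above follows directly from squaring the effect size; a minimal consistency check in Python using the values given in the abstract:

```python
# Explanatory power (coefficient of determination) from an effect size r:
# r^2 is the proportion of variance in the criterion explained by the predictor.
effect_sizes = {"Sebum (SB)": 0.416, "Moisture (MS)": 0.318}

for name, r in effect_sizes.items():
    r2 = r ** 2
    print(f"{name}: r = {r:.3f}, r^2 = {r2:.3f} (~{r2 * 100:.0f}% of variance explained)")
# Sebum (SB): r^2 ≈ 0.173, i.e. about 17%, matching the figure in the abstract.
```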

Anomaly Detection of Hadoop Log Data Using Moving Average and 3-Sigma (이동 평균과 3-시그마를 이용한 하둡 로그 데이터의 이상 탐지)

  • Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae;Won, Hee-Sun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.283-288
    • /
    • 2016
  • In recent years, there have been many research efforts on big data, and many companies have developed a variety of relevant products. Accordingly, we are able to store and analyze large volumes of log data that were difficult to handle in the traditional computing environment. To handle the large volume of log data generated rapidly across multiple servers, in this paper we design a new data storage architecture that efficiently analyzes big log data through Apache Hive. We then design and implement anomaly detection methods, based on moving average and 3-sigma techniques, which identify abnormal server status from the log data. We also show the effectiveness of the proposed methods by demonstrating that they identify anomalies correctly. These results indicate that the proposed approach is well suited to detecting anomalies in Hadoop log data.
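
As a rough illustration of the moving-average/3-sigma rule described above (a sketch only, not the authors' implementation; the file name, column names, and window size are assumptions):

```python
import pandas as pd

def detect_anomalies(series: pd.Series, window: int = 60) -> pd.Series:
    """Flag points falling outside mean +/- 3*std of a trailing moving window."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    return (series > mean + 3 * std) | (series < mean - 3 * std)

# Hypothetical per-minute error counts aggregated from Hadoop logs (e.g., via a Hive query).
logs = pd.read_csv("hadoop_log_counts.csv", parse_dates=["timestamp"], index_col="timestamp")
logs["anomaly"] = detect_anomalies(logs["error_count"])
print(logs.loc[logs["anomaly"]].head())
```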

Implementation of Cloud-Based Artificial Intelligence Education Platform (클라우드 기반 인공지능 교육 플랫폼 구현)

  • Wi, Woo-Jin;Moon, Hyung-Jin;Ryu, Gab-Sang
    • Journal of Internet of Things and Convergence
    • /
    • v.8 no.6
    • /
    • pp.85-92
    • /
    • 2022
  • Demand for big data analysis and AI developers is increasing, but the education base to supply them is lacking. In this paper, by developing a cloud-based artificial intelligence education platform, the goal was to establish an environment in which hands-on training can be carried out efficiently and at low cost at educational institutions and IT companies. Development of the education platform proceeded through user-scenario planning, architecture design, screen design, implementation of the development functions, and hardware construction. The platform consists of containerized workloads, a service management platform, and a lecture and development platform for instructors and students; cloud stability was secured through a real-time alarm system and aging tests, a CI/CD development environment, and reliability through Docker image distribution. The development of this education platform is expected to expand opportunities to enter new businesses in the education field and to contribute to fostering working-level human resources in the AI and big data fields.
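
To make the container distribution idea concrete, a loose sketch using the Docker SDK for Python; the registry, image name, container name, and port below are hypothetical and not taken from the paper:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Hypothetical lab image published through the platform's CI/CD pipeline.
image = "registry.example.com/ai-edu/lab-notebook:latest"
client.images.pull(image)

# Launch an isolated, containerized workspace for one student and expose its notebook port.
container = client.containers.run(
    image,
    detach=True,
    name="student-lab-01",
    ports={"8888/tcp": 8888},
)
print(container.id)
```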

Long-gap Filling Method for the Coastal Monitoring Data (해양모니터링 자료의 장기결측 보충 기법)

  • Cho, Hong-Yeon;Lee, Gi-Seop;Lee, Uk-Jae
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.33 no.6
    • /
    • pp.333-344
    • /
    • 2021
  • A technique is developed for filling the long gaps that occur frequently in ocean monitoring data. The method estimates the unknown values in a long gap as the sum of an estimated trend and selected residual components over the missing interval. It was used to impute long missing intervals of about one month, such as the temperature and water temperature records of the Ulleungdo ocean buoy data. The imputed data differed depending on the monitoring parameter, but the variation pattern was found to be reproduced appropriately. Although the method introduces bias and variance errors through the estimation of the trend and residual components, it was found to greatly reduce the bias error in statistical measures caused by long-term gaps. The mean and the 90% confidence interval of the gap-filling model's RMS errors are 0.93 and 0.35~1.95, respectively.
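
A minimal sketch of the trend-plus-residual idea (the assumptions here are this sketch's, not the paper's method: a low-order polynomial trend and residuals resampled from the observed part of the record; the file and column names are placeholders):

```python
import numpy as np
import pandas as pd

def fill_long_gap(series: pd.Series, trend_deg: int = 2, seed: int = 0) -> pd.Series:
    """Fill missing values with an estimated trend plus residuals drawn from observed data."""
    rng = np.random.default_rng(seed)
    values = series.to_numpy(dtype=float)
    t = np.arange(len(values), dtype=float)
    obs = ~np.isnan(values)

    # Trend component: low-order polynomial fitted to the observed points only.
    trend = np.polyval(np.polyfit(t[obs], values[obs], deg=trend_deg), t)

    # Residual component: resample residuals of the observed points into the gap.
    residuals = values[obs] - trend[obs]
    filled = values.copy()
    gap = ~obs
    filled[gap] = trend[gap] + rng.choice(residuals, size=int(gap.sum()))
    return pd.Series(filled, index=series.index)

# Hypothetical hourly buoy record containing a month-long gap.
buoy = pd.read_csv("ulleungdo_buoy.csv", parse_dates=["time"], index_col="time")
buoy["water_temp_filled"] = fill_long_gap(buoy["water_temp"])
```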

A Systematic Review of Toxicological Studies to Identify the Association between Environmental Diseases and Environmental Factors (환경성질환과 환경유해인자의 연관성을 규명하기 위한 독성 연구 고찰)

  • Ka, Yujin;Ji, Kyunghee
    • Journal of Environmental Health Sciences
    • /
    • v.47 no.6
    • /
    • pp.505-512
    • /
    • 2021
  • Background: The occurrence of environmental disease is known to be associated with chronic exposure to toxic chemicals, including waterborne contaminants, air/indoor pollutants, asbestos, ingredients in humidifier disinfectants, etc. Objectives: In this study, we reviewed toxicological studies related to environmental diseases as defined by the Environmental Health Act in Korea and to toxic chemicals. We also suggest a direction for future toxicological research necessary for the prevention and management of environmental disease. Methods: Trends in previous studies related to environmental disease were investigated through PubMed and Web of Science. A detailed review was provided of toxicological studies related to humidifier disinfectants. We identified adverse outcome pathways (AOPs) that can be linked to the induction of environmental diseases, and proposed a chemical screening system that uses AOPs, chemical toxicity big data, and deep learning models to select chemicals that induce environmental disease. Results: Research on chemical toxicity is increasing every year, but there are limitations in establishing a clear causal relationship between exposure to chemicals and the occurrence of environmental disease. It is necessary to develop various exposure and effect biomarkers related to disease occurrence and to conduct toxicokinetic studies. A novel chemical screening system that uses AOPs and chemical toxicity big data could be useful for selecting chemicals that cause environmental diseases. Conclusions: From a toxicological point of view, developing AOPs related to environmental diseases and a deep learning-based chemical screening system will contribute to preventing environmental diseases in advance.

An Analysis of the Meaning of Convenience Store Convenience Food Using Big Data (빅데이터를 활용한 편의점 간편식에 대한 의미 분석)

  • Kim, Ae-sook;Ryu, Gi-hwan;Jung, Ju-hee;Kim, Hee-young
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.4
    • /
    • pp.375-380
    • /
    • 2022
  • The purpose of this study is to identify consumers' perceptions and the meanings they attach to convenience store convenience food by using big data. For this study, news, Q&A posts (KnowledgeiN and tips), blogs, cafes, and web documents from NAVER and Daum were analyzed, with 'convenience store convenience food' used as the keyword for the data search. The data analysis period was the three years from January 1, 2019 to December 31, 2021. For data collection and analysis, frequency and matrix data were extracted using TEXTOM, and network analysis and visualization were conducted using the NetDraw function of the UCINET 6 program. As a result, convenience store convenience foods were clustered into health, diversity, convenience, and economy according to consumers' selection attributes. The results are expected to serve as a basis for developing new convenience menus that reflect the meanings consumers attach to convenience store convenience food, such as reasonable prices, discount coupons, and events.
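
The TEXTOM/UCINET workflow above amounts to building a keyword frequency list and a keyword co-occurrence matrix from the collected documents; a rough plain-Python equivalent (the documents and tokenization below are placeholders, not the study's data):

```python
from collections import Counter
from itertools import combinations

# Placeholder documents standing in for the collected NAVER/Daum posts.
documents = [
    "convenience store convenience food price discount event",
    "convenience store lunch box health diversity",
    "convenience food event coupon price",
]

tokenized = [doc.split() for doc in documents]

# Keyword frequencies (the "frequency data" step).
freq = Counter(token for tokens in tokenized for token in tokens)

# Symmetric keyword co-occurrence counts (the "matrix data" step).
cooc = Counter()
for tokens in tokenized:
    for a, b in combinations(sorted(set(tokens)), 2):
        cooc[(a, b)] += 1

print(freq.most_common(5))
print(cooc.most_common(5))
```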

A Study on Social Perception of Young Children with Disabilities through Social Media Big Data Analysis (소셜 미디어 빅데이터 분석을 통한 장애 유아에 대한 사회적 인식 연구)

  • Kim, Kyoung-Min
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.2
    • /
    • pp.1-12
    • /
    • 2022
  • The purpose of this study is to identify the characteristics of social perception of young children with disabilities over the past decade. For this purpose, Textom, an Internet-based big data analysis system, was used to collect data related to young children with disabilities posted on social media. Fifty keywords were selected in order of frequency through the data cleaning process. For semantic network analysis, centrality analysis and CONCOR analysis were performed with UCINET 6, and the analyzed data were visualized using NetDraw. As a result, keywords such as 'education', 'needs', 'parents', and 'inclusion' ranked high in frequency, degree centrality, and eigenvector centrality. In addition, the keywords 'parent', 'teacher', 'problem', 'program', and 'counseling' ranked high in betweenness centrality. In the CONCOR analysis, four clusters were formed, centered on the keywords 'disabilities', 'young child', 'diagnosis', and 'programs'. Based on these results, the topics in social perception of young children with disabilities were investigated, and implications for each topic were discussed.
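
For reference, the three centrality measures named above can be computed with networkx; a minimal sketch on a toy keyword network whose edges are illustrative rather than the study's data:

```python
import networkx as nx

# Toy keyword network; edge weights stand in for co-occurrence counts.
G = nx.Graph()
G.add_weighted_edges_from([
    ("education", "inclusion", 12),
    ("education", "parents", 9),
    ("parents", "counseling", 5),
    ("teacher", "program", 7),
    ("program", "counseling", 4),
])

degree = nx.degree_centrality(G)
eigenvector = nx.eigenvector_centrality(G, weight="weight")
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(f"{node}: degree={degree[node]:.2f}, "
          f"eigenvector={eigenvector[node]:.2f}, betweenness={betweenness[node]:.2f}")
```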

Deep Learning-based Material Object Recognition Research for Steel Heat Treatment Parts (딥러닝 기반 객체 인식을 통한 철계 열처리 부품의 인지에 관한 연구)

  • Park, Hye-Jung;Hwang, Chang-Ha;Kim, Sang-Gwon;Yeo, Kuk-Hyun;Seo, Sang-Woo
    • Journal of the Korean Society for Heat Treatment
    • /
    • v.35 no.6
    • /
    • pp.327-336
    • /
    • 2022
  • In this study, a model for automatically recognizing several kinds of steel parts through a camera before charging materials was developed, under the assumption that the temperature distribution of the atmosphere is known in advance. For model development, datasets were collected in random environments and in factories. The YOLOv5 model, which has strengths in real-time object detection, was used, and the burden of collecting large numbers of images and training models from scratch was reduced through transfer learning. The performance evaluation of the derived model showed excellent results, reaching 0.927 in terms of mAP@0.5. The derived model will be applied in a follow-up model development study that uses it to recognize the material accurately and then match it against the temperature distribution of the atmosphere to determine whether the material layout is suitable before charging.
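
For context, YOLOv5 transfer learning and inference are commonly driven through the public ultralytics/yolov5 interface; the sketch below uses that interface rather than the authors' code, and the dataset file, weight path, and image path are placeholders:

```python
import torch

# Fine-tuning (transfer learning) is usually launched with the yolov5 repo's train.py,
# starting from pretrained weights, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 --data parts.yaml --weights yolov5s.pt
# where parts.yaml (hypothetical) lists the steel-part classes and image folders.

# Load the fine-tuned weights through torch.hub for inference.
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

# Detect parts in a camera frame captured before charging (placeholder image path).
results = model("charging_area_frame.jpg")
results.print()                          # class, confidence, and box summary
detections = results.pandas().xyxy[0]    # detections as a pandas DataFrame
print(detections[["name", "confidence"]])
```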