• Title/Summary/Keyword: Structure Learning

International Research Trends in Science-Related Risk Education: A Bibliometric Analysis (상세 서지분석을 통한 과학과 관련된 위험 교육의 국제 연구 동향 분석)

  • Wonbin Jang;Minchul Kim
    • Journal of Science Education
    • /
    • v.48 no.2
    • /
    • pp.75-90
    • /
    • 2024
  • Contemporary society faces increasingly diverse risks with expanding impacts, and in response the importance of science education has become more prominent. This study aims to analyze the characteristics of existing research on science-related risk education and to derive implications for such education. For a detailed bibliometric analysis, we collected citation data from 83 international scholarly journals (SSCI) in the field of education indexed in the Web of Science using the keyword 'Scientific Risk,' and then conducted a bibliometric analysis with the bibliometrix package in R-Studio. The findings are as follows. First, research on risk education covers topics such as risk literacy, the structure of the risks addressed in science education, and the application and effectiveness of incorporating risk cases into educational practice. Second, a significant portion of the research on risks related to science education has been conducted within the framework of socioscientific issues (SSI) education. Third, research on risks related to science education primarily focuses on the transmission of scientific knowledge, with many studies examining formal education settings such as curricula and school learning environments. These findings have several implications. First, to effectively address risks in contemporary society, the scope of risk education should extend beyond topics such as nuclear energy and climate change to broader issues such as environmental pollution, AI, and various aspects of daily life. Second, topics explored in the context of SSI education need to be reexamined and further researched within the framework of risk education. Third, it is necessary to analyze not only risk perception but also risk assessment and risk management. Finally, research is needed on implementing risk education in informal educational settings such as science museums and the media.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility that houses computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, a proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in one element of the facility, it may affect not only that equipment but also other connected equipment, causing enormous damage. IT facilities in particular behave irregularly because of their interdependence, which makes root causes difficult to identify. Previous studies on failure prediction in data centers predicted failures by treating each server as a single, isolated state, without considering that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions already exist. In contrast, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved, mainly because server failures do not occur in isolation: a failure in one server can cause failures in other servers, or be triggered by them. In other words, whereas existing studies analyzed failures under the assumption that each server is independent, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device were sorted in chronological order, and when a failure occurred in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes was defined as a simultaneous failure. After constructing sequences of devices that failed at the same time, the five devices that most frequently failed simultaneously within these sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, a Hierarchical Attention Network model structure was used to reflect the fact that each server contributes to a complex failure to a different degree; this architecture improves prediction accuracy by assigning larger weights to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were modeled both as a single-server state and as a multiple-server state, and the two settings were compared. The second experiment improved prediction accuracy for the complex-failure case by optimizing the threshold for each server. In the first experiment, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server model correctly predicted failures for all five servers, supporting the hypothesis that servers affect one another. The results confirm that prediction performance is superior when multiple servers are modeled jointly rather than individually. In particular, the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold to each server further increased prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model for predicting server failures in data centers. The results are expected to help prevent failures in advance.
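
The abstract above describes per-server sequence encoding combined with server-level attention. Below is a minimal sketch of that idea in PyTorch; the class and parameter names (ServerAttentionNet, n_features, hidden) and all layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: one LSTM encodes each server's resource time series, an attention
# layer weights servers by their estimated influence on the failure, and a
# final layer predicts the outage probability.
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # one attention score per server
        self.out = nn.Linear(hidden, 1)    # failure logit

    def forward(self, x):
        # x: (batch, n_servers, time_steps, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                    # (batch, n_servers, hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # per-server attention weights
        context = (weights * h).sum(dim=1)             # weighted server summary
        return torch.sigmoid(self.out(context)), weights.squeeze(-1)

# Example: 5 servers, 60 time steps of 8 resource metrics each, batch of 4.
prob, server_weights = ServerAttentionNet(n_features=8)(torch.randn(4, 5, 60, 8))
```

The per-server thresholds mentioned in the abstract would then be applied to the resulting failure probabilities rather than a single global cutoff.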

Anatomical Studies of Major Tree Barks Grown in Korea - II. Anatomy of Quercus Barks (한국산(韓國産) 주요수피(主要樹皮)의 해부학적(解剖學的) 연구(硏究) - 제2보(第二報) 참나무속(屬) 수피(樹皮)의 해부(解剖))

  • Lee, Hwa-Hyoung;Lee, Phil-Woo
    • Journal of the Korean Wood Science and Technology
    • /
    • v.5 no.1
    • /
    • pp.3-8
    • /
    • 1977
  • Bark comprises about 10 to 20 percent of a typical log by volume and is generally regarded as an unwanted residue rather than a potentially valuable resource. As the world faces decreasing forest resources, resource pressures dictate that bark should be treated as a raw material instead of a waste. The utilization of the largely wasted bark of the genus Quercus grown in Korea can be enhanced by understanding its anatomical structure and properties. In this paper, bark characteristics of Quercus grown in Korea are described. Regarding bark anatomy, general features such as color of the rhytidome, exfoliating form, color and arrangement of the periderm, and thickness of the inner and outer bark are discussed. Studies of the microscopic structure cover sieve tubes, companion cells, parenchyma, phloem fibers, rays, the periderm (phelloderm, phellogen, phellem), sclereids, and crystals. The results may be summarized as follows: 1. Among the general characteristics of the rhytidomes, exfoliation is not easy and sclereids are distinct to the naked eye. The inner bark is thicker than the outer bark except in Q. variabilis. 2. The phelloderm and phellogen are not clearly distinguishable in Quercus bark. The phellem is developed conspicuously in Q. variabilis, whereas that of Q. acutissima is composed of thin-walled phellem and thick-walled stone cells. 3. Quercus bark contains sieve tubes, companion cells, phloem fibers, and sclereids. The sclereids are its most distinguishing characteristic compared with Pinus and Populus, and the volume percentage of sclereids is higher than that of fibers. 4. Rays are 1~3 seriate and multiseriate, ranging from 15 to 20. 5. Parenchyma cells contain two types of crystal, polygonal and druse.

Study on The Chinese Poems Composed by Mi-Am Yu Hee Choon (미암(眉巖) 유희춘(柳希春)의 한시(漢詩) 연구(硏究))

  • Song, Jae-yong
    • (The)Study of the Eastern Classic
    • /
    • no.57
    • /
    • pp.383-406
    • /
    • 2014
  • Mi-Am Yu Hee Choon (1513~1577) considered poetry a part of his life, and this writer therefore focused specifically on Mi-Am Yu Hee Choon's Chinese poems. The following conclusions are drawn from the materials discussed in this article. Mi-Am tried to understand literature from an ethical perspective. The number of Chinese poems composed by Mi-Am is estimated at about 300, and the number of pieces this writer could locate was 285. Mi-Am also took poetic composition seriously and placed more emphasis on content than on structure. Among Go Shi, Yul Shi, and Jul Gu, Jul Gu (especially Chil Un) is the largest in quantity, and it is presumed that he preferred Chil (seven) Un over Oh (five) Un. Among the Go Shi there are relatively many Jeon-Go. The Jul Gu, the compositional structure Mi-Am could make the best use of, were mostly about daily life, and the Yul Shi frequently expressed his feelings about the real world and his self-examination. Mi-Am's poems can be divided into those written during his exile and those written while serving the king again after his release. During the exile period, self-discipline through learning, friendship, and love for the people were the main themes; after he was released and returned to serve the king, his poems were mostly about loyalty to the king, interaction with acquaintances, emotions, ancestor worship, self-examination, and conjugal affection through literary communion. Many of Mi-Am's poems include Eum Song Cha Un in their titles, and the mainstream of his poetry concerned daily life and experience, mostly expressing the facts themselves naturally and calmly, without exaggeration. That Mi-Am considered poetry a part of his life and practiced literary communion with his wife, Song Duk Bong, by writing poems about the ordinary things that happened between them is worthy of notice.

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.71-90
    • /
    • 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with AI-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading the fourth industrial revolution in China with remarkable achievements in InsurTech and digital platforms, the result of constant innovation under its corporate keywords 'finance and technology' and 'finance and ecosystem.' Accordingly, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model in order to provide strategic implications for revitalizing AI technology-based businesses of domestic insurers. The ser-M model is a frame in which the CEO's vision and leadership, the historical environment of the enterprise, the utilization of various resources, and the unique mechanisms relating them can be interpreted in an integrated manner in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as face, voice, and facial expression recognition. In addition, online data in China and the vast offline data and insights accumulated by the company were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation and, as of 2019, reached $155 billion in sales, ranking seventh among all companies in the Global 2000 list compiled by Forbes Magazine. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and demographic change in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital-technology-focused leadership. Backed by the founder's strong leadership in response to environmental changes, the company successfully led its InsurTech and platform businesses by innovating internal resources, investing in AI technology, securing excellent professionals, and strengthening big data capabilities, while combining external absorptive capacity and strategic alliances across industries. This success story offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry driven by digital technology and quickly arm themselves with digital-technology-oriented leadership to spearhead their firms' digital transformation.
Second, the Korean government should urgently overhaul related laws and systems to further promote data use across industries and provide bold support such as deregulation, tax benefits, and platform provision to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to invest more boldly in the development of artificial intelligence technology so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through trained artificial intelligence. Finally, since generalization from a single case of an overseas insurer has limits, it is hoped that future research will examine various management strategies related to artificial intelligence technology more extensively, by analyzing cases from multiple industries or companies or by conducting empirical studies.

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to one class far outnumbers the number belonging to the other class an 'imbalanced data set.' Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity.' In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records form the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such; specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to low specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to produce a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved, but sensitivity decreases. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model.' The process of constructing and predicting with our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I and ANN_I models are constructed using the imbalanced data set, and the SVM_B model is constructed using the balanced data set. SVM_I is superior in sensitivity and SVM_B is superior in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and decision tree: for records on which SVM_I and SVM_B disagree, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. What we present here is the structure, or framework, of our hybrid SVM model, not a specific threshold value such as 0.285; the threshold in the above discrimination rules can therefore be changed to any value depending on the data. To evaluate the performance of our hybrid SVM model, we used the 'churn' data set in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I or SVM_B models. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of SVM_I is 94.65%, and the specificity of SVM_B is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of SVM_B while maintaining the sensitivity of SVM_I.
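
The hybrid framework above combines an SVM trained on the imbalanced data (SVM_I), an SVM trained on an oversampled balanced set (SVM_B), and an ANN-based rule for records on which the two SVMs disagree. Below is a minimal sketch of that framework using scikit-learn; the function names, the use of predicted churn probability as the "ANN_I output value," and the default threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of the hybrid SVM idea, assuming numpy arrays X (features) and y
# (binary labels) for a churn-style imbalanced data set.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def oversample_minority(X, y):
    """Oversample the minority class so both classes are equally represented."""
    classes, counts = np.unique(y, return_counts=True)
    majority, minority = classes[np.argmax(counts)], classes[np.argmin(counts)]
    X_min_up, y_min_up = resample(X[y == minority], y[y == minority],
                                  replace=True, n_samples=int(counts.max()),
                                  random_state=0)
    X_bal = np.vstack([X[y == majority], X_min_up])
    y_bal = np.concatenate([y[y == majority], y_min_up])
    return X_bal, y_bal

def fit_hybrid(X, y, threshold=0.285):                 # 0.285 is data-dependent
    svm_i = SVC().fit(X, y)                            # SVM on imbalanced data
    ann_i = MLPClassifier(max_iter=1000).fit(X, y)     # ANN on imbalanced data
    X_bal, y_bal = oversample_minority(X, y)
    svm_b = SVC().fit(X_bal, y_bal)                    # SVM on balanced data
    return svm_i, svm_b, ann_i, threshold

def predict_hybrid(models, X, churn_label=1, retention_label=0):
    svm_i, svm_b, ann_i, threshold = models
    p_i, p_b = svm_i.predict(X), svm_b.predict(X)
    # "ANN_I output value" is taken here as the predicted churn probability.
    churn_col = list(ann_i.classes_).index(churn_label)
    ann_out = ann_i.predict_proba(X)[:, churn_col]
    rule = np.where(ann_out >= threshold, churn_label, retention_label)
    # Agreeing SVMs give the final answer; disagreements fall back on the rule.
    return np.where(p_i == p_b, p_i, rule)
```

As the abstract notes, the 0.285 threshold is specific to the data used there; in practice it would be re-derived (for example, from a decision tree fitted to the ANN output on the disagreement records) for each new data set.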

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we present an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; character strings such as the device type, manufacturer, manufacturing date, and specifications are not valuable to the application. The application therefore has to analyze only the regions of interest and specific character types to extract the valuable information. We adopted CNN (Convolutional Neural Network)-based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into spatial sequential feature vectors, and the third is a bidirectional long short-term memory network that converts the spatial sequential information into character strings through time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount; the device ID consists of 12 Arabic numerals and the gas usage amount consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when requests from the master process are present, the slave process converts the queued image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflection, scale, and slant): normal data are clean images, noise means images with noise signals, reflection means images with light reflected in the gasometer region, scale means images with small object size due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
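
The selective OCR pipeline above pairs a CNN detector for the regions of interest with a CRNN (CNN feature extractor followed by a bidirectional LSTM) that reads the digits. Below is a minimal CRNN sketch in PyTorch; the layer sizes, the 11-class output (ten digits plus a blank symbol for CTC-style decoding), and all names are illustrative assumptions, not the system's exact configuration.

```python
# Sketch: a CNN turns the cropped region of interest into a sequence of feature
# vectors along the width axis, and a bidirectional LSTM maps that sequence to
# per-step character scores.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes=11, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                       # grayscale crop in
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),            # collapse height, keep width
        )
        self.rnn = nn.LSTM(256, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                               # x: (B, 1, H, W)
        f = self.cnn(x)                                 # (B, 256, 1, W')
        seq = f.squeeze(2).permute(0, 2, 1)             # (B, W', 256) sequence
        out, _ = self.rnn(seq)                          # (B, W', 2*hidden)
        return self.fc(out)                             # per-step class scores

# Example: a batch of 8 cropped digit-strip images, 32 px high, 128 px wide.
logits = CRNN()(torch.randn(8, 1, 32, 128))             # -> (8, 32, 11)
```

Training such a reader typically uses a sequence loss such as CTC to cope with varying string lengths (4~5 digits for usage, 12 for the device ID); the abstract does not specify the exact training objective.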

Development and Application of Earth Science Module Based on Earth System (지구계 주제 중심의 지구과학 모듈 개발 및 적용)

  • Lee, Hyo-Nyong;Kwon, Young-Ryun
    • Journal of the Korean earth science society
    • /
    • v.29 no.2
    • /
    • pp.175-188
    • /
    • 2008
  • The purposes of this study were to develop an Earth systems-based earth science module and to investigate the effects of its field application. The module was applied to two classrooms totaling 76 second-year high school students in order to investigate its effectiveness. Data were collected from observations in earth science classrooms, interviews, and questionnaires. The findings were as follows. First, the Earth systems-based earth science module was designed to align with the aims of the national Earth Science Curriculum and to improve students' earth science literacy. The module consisted of two sections covering a total of seven instructional hours for high school students. The first section addressed understanding the Earth system through its individual components and their characteristics, properties, and structure. The second section, consisting of four instructional hours, dealt with Earth environmental problems, the understanding of subsystems changing through natural processes and cycles, and human interactions and their effects upon Earth systems. Second, the module was helpful for learning the importance of the interactions among water, rock, air, and life in understanding the Earth system and its components, characteristics, and properties. The Earth systems-based earth science module is a valuable and helpful instructional material that can enhance students' understanding of Earth systems and their earth science literacy.

Analyzing Contextual Polarity of Unstructured Data for Measuring Subjective Well-Being (주관적 웰빙 상태 측정을 위한 비정형 데이터의 상황기반 긍부정성 분석 방법)

  • Choi, Sukjae;Song, Yeongeun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.83-105
    • /
    • 2016
  • Measuring an individual's subjective wellbeing in an accurate, unobtrusive, and cost-effective manner is a core success factor of a wellbeing support system, a type of medical IT service. However, measurements with self-report questionnaires and wearable sensors, though very accurate, are cost-intensive and obtrusive when the wellbeing support system must run in real time. Recently, inferring the state of subjective wellbeing from unstructured data with conventional sentiment analysis has been proposed as an alternative that resolves the drawbacks of self-report questionnaires and wearable sensors. However, this approach does not consider contextual polarity, which results in lower measurement accuracy, and there is no sentiment word net or ontology for the subjective wellbeing domain. Hence, this paper proposes a method to extract keywords, and their contextual polarity, that represent the subjective wellbeing state from unstructured text on online websites in order to improve the accuracy of the sentiment analysis. The proposed method is as follows. First, a set of general sentiment words is prepared; SentiWordNet, the most widely used dictionary, was adopted, containing about 100,000 nouns, verbs, adjectives, and adverbs with polarities from -1.0 (extremely negative) to 1.0 (extremely positive). Second, corpora on subjective wellbeing (SWB corpora) were obtained by crawling online text, and a survey was conducted to prepare a learning dataset that includes individuals' opinions and their self-reported wellness levels, such as stress and depression; the participants were asked to respond with their feelings about online news on two topics. Next, three data sources were extracted from the SWB corpora: demographic information, psychographic information, and structural characteristics of the text (e.g., the number of words used, simple statistics on the special characters used), which were used to adjust the level of a specific SWB factor. Finally, a set of reasoning rules was generated for each wellbeing factor to estimate an individual's SWB from the text the individual wrote. The experimental results suggest that using contextual polarity for each SWB factor (e.g., stress, depression) significantly improves estimation accuracy compared with conventional sentiment analysis methods based on SentiWordNet alone. Although literature on Korean sentiment analysis is available, such studies used only a limited set of sentiment words; because of the small vocabulary, many sentences are overlooked when estimating sentiment levels. The proposed method, in contrast, can identify words that are sentiment-neutral in general usage as sentiment words in the context of a specific SWB factor. The results also suggest that a dictionary of contextual polarities should be constructed alongside a common-sense-based dictionary such as SenticNet; such efforts will enrich and enlarge the application area of sentic computing. The study is helpful to practitioners and managers of wellness services in that several characteristics of unstructured text have been identified for improving SWB measurement. Consistent with the literature, the results showed that gender and age affect the SWB state when individuals are exposed to an identical cue in online text. In addition, the length of the textual response and the usage pattern of special characters were found to indicate an individual's SWB. These findings imply that better SWB measurement should involve collecting the textual structure and the individual's demographic conditions. In the future, the proposed method should be improved through automated identification of contextual polarity so that the vocabulary can be enlarged in a cost-effective manner.
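
The core idea above is to score text with general SentiWordNet polarities while overriding selected words with factor-specific ("contextual") polarities for a given SWB factor such as stress. Below is a minimal sketch using NLTK's SentiWordNet corpus; the override dictionary, example words, and function names are illustrative assumptions, not the paper's lexicon or rules.

```python
# Sketch of contextual-polarity scoring. Requires nltk, plus
# nltk.download('wordnet') and nltk.download('sentiwordnet') having been run.
from nltk.corpus import sentiwordnet as swn

# Hypothetical overrides for the "stress" factor: words that are roughly
# neutral in SentiWordNet but polar in this context.
STRESS_CONTEXT = {"deadline": -0.6, "overtime": -0.7, "vacation": 0.8}

def word_polarity(word, context=None):
    """General SentiWordNet polarity, overridden by contextual polarity if given."""
    if context and word in context:
        return context[word]
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    # Average positive-minus-negative score over the word's synsets.
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

def text_polarity(text, context=None):
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [word_polarity(w, context) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

print(text_polarity("Another deadline and overtime again", STRESS_CONTEXT))
```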

Exploration of Features of Korean Eighth Grade Students' Achievement and Curriculum Matching in TIMSS 2015 Earth Science (TIMSS 2015 중학교 2학년 지구과학 영역에 대한 우리나라 학생들의 성취 특성 및 교육과정 연계성 탐색)

  • Kwak, Youngsun
    • Journal of The Korean Association For Science Education
    • /
    • v.37 no.1
    • /
    • pp.9-16
    • /
    • 2017
  • The results of TIMSS 2015 were announced at the end of 2016. In this research, we conducted a test-curriculum matching analysis for 8th-grade earth science and analyzed Korean students' percentages of correct answers and their responses to the TIMSS earth science test items. According to the results, Korean students showed a high percentage of correct answers when the item topics were covered in the 2009 revised science curriculum, and they revealed weakness in constructed-response items, as the percentage of correct answers on constructed-response items was half that of multiple-choice items. By earth science topic, for the 'solid earth' area, which includes Earth's structure and physical features as well as Earth's processes and history, students showed a high percentage of correct answers on multiple-choice items; however, they showed a low percentage of correct answers on items that require applying knowledge to everyday situations and connecting with other areas of science such as biology. For the 'atmosphere and ocean' area, which includes Earth's processes and cycles, students scored low on comparing climates between regions, features of global warming, and so on. For the 'universe' area, students scored high on Earth's rotation and revolution, the Moon's gravity, and so on, because they have learned these topics since primary school. The conclusion discusses ways to secure content connections between the primary and middle school earth science curricula, ways to develop students' science-inquiry-related competencies, and other measures for improving the middle school earth science curriculum as well as its teaching and learning.