• Title/Summary/Keyword: map models


Development of Regularized Expectation Maximization Algorithms for Fan-Beam SPECT Data (부채살 SPECT 데이터를 위한 정칙화된 기댓값 최대화 재구성기법 개발)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Soo-Jin;Kim, Kyeong-Min;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine / v.39 no.6 / pp.464-472 / 2005
  • Purpose: SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. Materials and Methods: The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest-neighbor, bilinear and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. Results: For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to the parallel geometry when accuracy and computational load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Conclusion: Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.
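To make the one-step-late (OSL) update concrete, here is a minimal sketch of an MAP-EM OSL iteration for a generic linear emission model. It assumes NumPy, a precomputed ray-tracing system matrix, and a simple quadratic smoothing penalty standing in for the membrane/thin-plate priors; the array names and the penalty are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def map_em_osl(system_matrix, projections, n_iters=50, beta=0.01):
    """One-step-late MAP-EM for a generic linear emission model.

    system_matrix : (n_bins, n_pixels) array, A[i, j] = contribution of
                    pixel j to projection bin i (e.g. from ray tracing).
    projections   : (n_bins,) measured counts.
    beta          : strength of the smoothing penalty.
    """
    n_bins, n_pixels = system_matrix.shape
    image = np.ones(n_pixels)                    # non-negative starting image
    sens = system_matrix.sum(axis=0)             # sensitivity: sum_i A[i, j]

    for _ in range(n_iters):
        forward = system_matrix @ image          # expected counts
        ratio = projections / np.maximum(forward, 1e-12)
        backproj = system_matrix.T @ ratio       # EM numerator

        # One-step-late: evaluate the prior gradient at the current image.
        # A simple quadratic pull toward the image mean stands in here for
        # the membrane/thin-plate priors used in the paper.
        prior_grad = image - image.mean()

        denom = sens + beta * prior_grad
        image = image * backproj / np.maximum(denom, 1e-12)
        image = np.maximum(image, 0.0)           # keep the image non-negative
    return image
```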

Bankruptcy Type Prediction Using A Hybrid Artificial Neural Networks Model (하이브리드 인공신경망 모형을 이용한 부도 유형 예측)

  • Jo, Nam-ok;Kim, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.79-99 / 2015
  • The prediction of bankruptcy has been extensively studied in the accounting and finance field. It can have an important impact on lending decisions and the profitability of financial institutions in terms of risk management. Many researchers have focused on constructing a more robust bankruptcy prediction model. Early studies primarily used statistical techniques such as multiple discriminant analysis (MDA) and logit analysis for bankruptcy prediction. However, many studies have demonstrated that artificial intelligence (AI) approaches, such as artificial neural networks (ANN), decision trees, case-based reasoning (CBR), and support vector machines (SVM), have outperformed statistical techniques for business classification problems since the 1990s, because statistical methods impose some rigid assumptions in their application. In previous studies on corporate bankruptcy, many researchers have focused on developing a bankruptcy prediction model using financial ratios. However, there are few studies that identify the specific types of bankruptcy. Previous bankruptcy prediction models have generally been interested only in predicting whether or not firms will become bankrupt, and most of the studies on bankruptcy types have been limited to reviewing the previous literature or performing a case study. Thus, this study develops a model using data mining techniques for predicting the specific types of bankruptcy, as well as the occurrence of bankruptcy, in Korean small- and medium-sized construction firms in terms of profitability, stability, and activity indices, so that firms will be able to take preventive action in advance. We propose a hybrid approach using two artificial neural networks (ANNs) for the prediction of bankruptcy types. The first is a back-propagation neural network (BPN) model using supervised learning for bankruptcy prediction, and the second is a self-organizing map (SOM) model using unsupervised learning to classify bankruptcy data into several types. Based on the constructed model, we predict the bankruptcy of companies by applying the BPN model to a validation set that was not used in the development of the model. This allows the specific types of bankruptcy to be identified by feeding the bankruptcy cases predicted by the BPN model into the SOM model. We calculated the averages of selected input variables through statistical tests for each cluster to interpret the characteristics of the clusters derived by the SOM model. Each cluster represents a bankruptcy type derived from the data of bankrupt firms, and the input variables are financial ratios used to interpret the meaning of each cluster. The experimental results show that each of the five bankruptcy types has different characteristics according to the financial ratios. Type 1 (severe bankruptcy) has inferior financial statements except for EBITDA (earnings before interest, taxes, depreciation, and amortization) to sales, based on the clustering results. Type 2 (lack of stability) has a low quick ratio, low stockholders' equity to total assets, and high total borrowings to total assets. Type 3 (lack of activity) has slightly low total asset turnover and fixed asset turnover. Type 4 (lack of profitability) has low retained earnings to total assets and low EBITDA to sales, which represent indices of profitability. Type 5 (recoverable bankruptcy) includes firms that have a relatively good financial condition compared to the other bankruptcy types even though they are bankrupt. Based on the findings, researchers and practitioners engaged in the credit evaluation field can obtain more useful information about the types of corporate bankruptcy. In this paper, we used the financial ratios of firms to classify bankruptcy types. It is important to select input variables that correctly predict bankruptcy and meaningfully classify the type of bankruptcy. In a further study, we will include non-financial factors such as the size, industry, and age of the firms, so that realistic clustering results for bankruptcy types can be obtained by combining qualitative factors and reflecting the domain knowledge of experts.
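As a rough illustration of the two-stage hybrid described above (a supervised back-propagation network for bankruptcy occurrence, followed by an unsupervised SOM that groups the predicted-bankrupt firms into candidate types), the sketch below uses scikit-learn's MLPClassifier and the third-party MiniSom package. The variable names, network size, and map size are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # back-propagation network
from minisom import MiniSom                       # self-organizing map (third-party)

def hybrid_bankruptcy_types(X_train, y_train, X_valid, som_shape=(3, 2), seed=0):
    """Stage 1: a supervised BPN predicts bankruptcy (y: 1 = bankrupt, 0 = sound).
    Stage 2: a SOM clusters the firms predicted as bankrupt into candidate types."""
    bpn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    bpn.fit(X_train, y_train)

    pred = bpn.predict(X_valid)
    bankrupt = X_valid[pred == 1]                  # only predicted-bankrupt firms

    som = MiniSom(som_shape[0], som_shape[1], bankrupt.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.train_random(bankrupt, num_iteration=1000)

    # Each firm's candidate bankruptcy "type" is its winning SOM node.
    types = np.array([som.winner(row) for row in bankrupt])
    return pred, bankrupt, types
```

Interpreting each SOM node would then follow the paper's approach: compare the averages of the financial ratios of the firms mapped to that node against the other nodes.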

Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science / v.19 no.3 / pp.49-61 / 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking when processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning because learning requires action, so training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) details of the essence of the DSA map and (2) a statement of warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product for different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers. Each of the three training exercises is based on research into the information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments. "Simplicity is a virtue, rather than a curse". Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor". Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics by actors. The DSA mapping approach here concerns the match between strategy and an environment, that is, the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives.
The three case studies in this article permit the learner to apply propositions in aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable. An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise of both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions in both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought are suggestions on where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory in how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content containing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conduct opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires a different way for analysts to gain access; options include open APIs, search tools, DB2DB interfaces, content purchasing, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum contents and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, reputation, etc. In this phase, we used free software such as the tm, KoNLP, ggplot2 and plyr packages of the R project. As the result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, a topic word cloud, heat maps, a valence tree map, and other visualized images, providing vivid, full-colored examples built with open library packages of the R project. Business actors can, at a swift glance, detect areas that are weak, strong, positive, negative, quiet or loud. The heat map explains the movement of sentiment or volume in a category-by-time matrix, where the density of color shows changes over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers in quickly understanding the "big picture" business situation through a hierarchical structure, since a tree map can present buzz volume and sentiment in a visualized result for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to on-going changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
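The case study itself was carried out with R packages (tm, KoNLP, ggplot2, plyr). Purely to illustrate the collect-qualify-analyze-visualize cycle in another environment, a small Python sketch follows; the lexicon, column names, and scoring rule are placeholder assumptions, not the study's resources.

```python
import re
from collections import Counter
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical domain lexicon; a real study would build one from the corpus.
POSITIVE = {"delicious", "tasty", "best", "love"}
NEGATIVE = {"salty", "expensive", "worst", "bland"}

def qualify(text):
    """Phase 2: light cleaning (lowercase, strip URLs and non-letters)."""
    text = re.sub(r"http\S+", " ", text.lower())
    return re.sub(r"[^a-z\s]", " ", text)

def score(text):
    """Phase 3: naive lexicon-based sentiment polarity."""
    words = Counter(qualify(text).split())
    return sum(words[w] for w in POSITIVE) - sum(words[w] for w in NEGATIVE)

def heatmap(df):
    """Phase 4: category-by-month sentiment heat map.
    Expects a DataFrame with 'text', 'category', and 'date' columns."""
    df = df.assign(sentiment=df["text"].map(score),
                   month=pd.to_datetime(df["date"]).dt.to_period("M"))
    grid = df.pivot_table(index="category", columns="month",
                          values="sentiment", aggfunc="mean")
    plt.imshow(grid, cmap="RdYlGn", aspect="auto")
    plt.xticks(range(len(grid.columns)), [str(m) for m in grid.columns], rotation=45)
    plt.yticks(range(len(grid.index)), grid.index)
    plt.colorbar(label="mean sentiment")
    plt.tight_layout()
    plt.show()
```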

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1109-1123 / 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is therefore important to identify the severity of a wildfire as well as the burned area in order to manage the forest sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine the severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A Synthetic Aperture Radar-C data and Sentinel-2A Multi Spectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used for developing wildfire severity mapping models with three machine learning algorithms (i.e., Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. The cross-site validation conducted to examine the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases from different seasons and areas are added in the future.
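A minimal sketch of the modelling and cross-site (leave-one-fire-out) validation steps described above is given below, using scikit-learn's RandomForestClassifier. The feature stack (e.g. pre/post-fire index and backscatter differences) and data layout are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def train_severity_model(features, labels, n_trees=500, seed=0):
    """features: (n_pixels, n_features) stack of Sentinel-1 backscatter and
    Sentinel-2 index changes (e.g. dNBR, dNDVI); labels: severity classes."""
    model = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    model.fit(features, labels)
    return model

def cross_site_validation(sites):
    """Leave-one-fire-out evaluation: train on the other fires, test on the held-out one.
    `sites` maps a site name to its (features, labels) pair."""
    scores = {}
    for held_out in sites:
        X_train = np.vstack([X for name, (X, y) in sites.items() if name != held_out])
        y_train = np.concatenate([y for name, (X, y) in sites.items() if name != held_out])
        X_test, y_test = sites[held_out]
        model = train_severity_model(X_train, y_train)
        scores[held_out] = accuracy_score(y_test, model.predict(X_test))
    return scores
```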

Estimating Fine Particulate Matter Concentration using GLDAS Hydrometeorological Data (GLDAS 수문기상인자를 이용한 초미세먼지 농도 추정)

  • Lee, Seulchan;Jeong, Jaehwan;Park, Jongmin;Jeon, Hyunho;Choi, Minha
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.919-932 / 2019
  • Fine particulate matter (PM2.5) is not only affected by anthropogenic emissions but is also intensified, transported, and reduced by hydrometeorological factors. Therefore, it is essential to understand the relationships between hydrometeorological factors and PM2.5 concentration. In Korea, PM2.5 concentration is measured at ground observatories, and estimated values are assigned to locations where observatories are not present. Such data are not suitable for representing an area, so it is impossible to know the accurate concentration at those locations. In addition, it is hard to trace the migration, intensification, and reduction of PM2.5. In this study, we analyzed the relationships between hydrometeorological factors, acquired from the Global Land Data Assimilation System (GLDAS), and PM2.5 by means of Bayesian Model Averaging (BMA). With BMA, we also selected the factors that have a meaningful relationship with the variation of PM2.5 concentration. Four PM2.5 concentration models for different seasons were developed using those selected factors, together with Aerosol Optical Depth (AOD) from the MODerate resolution Imaging Spectroradiometer (MODIS). Finally, we mapped the results of the models to show the spatial distribution of PM2.5. The models correlated well with the observed PM2.5 concentration (R ~0.7; IOA ~0.78; RMSE ~7.66 ㎍/㎥). When the models were compared with the observed PM2.5 concentrations at different locations, the correlation coefficients differed (R: 0.32-0.82), although there were similarities in the data distribution. The concentration maps developed using the models showed their capability to represent the temporal and spatial variation of PM2.5 concentration. The results of this study are expected to facilitate research that aims to analyze the sources and movements of PM2.5, if the study area is extended to East Asia.
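To show one way the BMA-based factor selection described above can work, the sketch below approximates Bayesian Model Averaging over all subsets of a modest number of candidate predictors using ordinary least squares fits and BIC-based model weights. The predictor names in the usage comment and the BIC approximation itself are illustrative assumptions, not the authors' BMA implementation.

```python
import itertools
import numpy as np

def bic_weights_bma(X, y, names):
    """Approximate BMA over all predictor subsets using OLS + BIC.
    Returns the posterior inclusion probability of each predictor.
    Exhaustive over 2^p - 1 subsets, so keep the candidate set small."""
    n, p = X.shape
    models, bics = [], []
    for k in range(1, p + 1):
        for subset in itertools.combinations(range(p), k):
            Xs = np.column_stack([np.ones(n), X[:, subset]])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            bic = n * np.log(rss / n) + Xs.shape[1] * np.log(n)
            models.append(subset)
            bics.append(bic)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()                                  # approximate model posteriors
    return {name: float(sum(wi for wi, m in zip(w, models) if j in m))
            for j, name in enumerate(names)}

# Usage sketch: predictors could be GLDAS variables plus MODIS AOD, e.g.
# incl = bic_weights_bma(X, y, ["AOD", "wind", "humidity", "precip", "temp"])
```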

A Reliability Analysis of Shallow Foundations using a Single-Mode Performance Function (단일형 거동함수에 의한 얕은 기초의 신뢰도 해석 -임해퇴적층의 토성자료를 중심으로-)

  • 김용필;임병조
    • Geotechnical Engineering / v.2 no.1 / pp.27-44 / 1986
  • The measured soil data are analyzed with descriptive statistics and classified into four models: uncorrelated-normal (UNNO), uncorrelated-nonnormal (UNNN), correlated-normal (CONO), and correlated-nonnormal (CONN). This paper presents comparisons of the reliability index and check points obtained with the advanced first-order second-moment method for the four models, together with a BASIC program. A single-mode performance function is composed of the basic design variables of bearing capacity and settlement of shallow foundations, and the soil data analyzed above are used as its input. The main conclusions obtained in this study are summarized as follows: 1. In the bearing-capacity mode, the cohesion and bearing-capacity factors from the C-U test fit normal and lognormal distributions, respectively, and show a low negative correlation with each other. The reliability index of the CONN model is the lowest of the four models, so this model is recommended for reliability-based design, whereas the other models might overestimate the geotechnical conditions. 2. In the settlement mode, the virgin compression ratio and preconsolidation pressure fit normal and lognormal distributions, respectively. When settlements are constrained to the lower values computed by the deterministic method, the CONN model again gives the lowest reliability of the four models.
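For readers who want to see the mechanics of the advanced first-order second-moment (AFOSM) reliability index and check point mentioned above, here is a minimal Hasofer-Lind (HL-RF) sketch for uncorrelated normal variables; the correlated and non-normal cases treated in the paper (e.g. via equivalent-normal transformations) are omitted, and the bearing-capacity performance function in the usage comment is a placeholder.

```python
import numpy as np

def hasofer_lind(g, mu, sigma, x0=None, tol=1e-6, max_iter=100):
    """AFOSM (Hasofer-Lind) reliability index for uncorrelated normal variables
    via the HL-RF iteration.

    g     : performance function g(x); failure when g(x) < 0.
    mu    : vector of means; sigma : vector of standard deviations.
    Returns (beta, design_point_x)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    x = mu.copy() if x0 is None else np.asarray(x0, float)

    def grad(x, h=1e-6):          # numerical gradient of g in x-space
        return np.array([(g(x + h * e) - g(x - h * e)) / (2 * h)
                         for e in np.eye(len(x))])

    u = (x - mu) / sigma          # standard normal space
    for _ in range(max_iter):
        gx = g(mu + sigma * u)
        gu = grad(mu + sigma * u) * sigma          # chain rule to u-space
        u_new = (gu @ u - gx) / (gu @ gu) * gu     # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, mu + sigma * u                    # index and check (design) point

# Placeholder performance function: resistance R minus load effect S.
# beta, x_star = hasofer_lind(lambda x: x[0] - x[1], mu=[300.0, 150.0], sigma=[45.0, 30.0])
```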


Understanding of the Duplex Thrust System - Application to the Yeongwol Thrust System, Taebaeksan Zone, Okcheon Belt (듀플렉스 트러스트 시스템의 이해 - 옥천대 태백산지역 영월 트러스트 시스템에의 적용)

  • Jang, Yirang
    • Economic and Environmental Geology / v.52 no.5 / pp.395-407 / 2019
  • The duplex system has been considered an important slip-transfer mechanism for evaluating the evolution of orogenic belts. Duplexes are generally found in the hinterland portion of fold-thrust belts and accommodate large amounts of the total shortening. Thus, understanding their geometric and kinematic evolution can provide information for evaluating the evolution of the entire orogenic belt. Duplexes are recognized as closed-loop thrust traces in map view, indicating higher connectivity than imbricate fans. As originally defined, a duplex is an array of thrust horses surrounded by thrust faults, including the floor and roof thrusts and the imbricate faults between them. Duplexes can accommodate regional layer-parallel shortening and transfer slip from a floor thrust to a roof thrust. However, an imbricate fault is not the only means of layer-parallel shortening (LPS) and displacement transfer within duplexes; LPS cleavages and detachment folds can also play the same role. From this point of view, a duplex can be divided into three types: 1) fault duplex, 2) cleavage duplex, and 3) fold duplex. The fault duplex can be further subdivided into the Boyer-type duplex, the duplex system first designed in the 1980s and widely applied to most of the major fold-thrust belts in the world, and the connecting-splay duplex, in which the horses are emplaced in a different time order from that of the Boyer type. In contrast, the cleavage and fold duplexes are newly defined types based on selected examples. In the Korean Peninsula, the Yeongwol area, the western part of the Taebaeksan Zone of the Okcheon Belt, provides an excellent natural laboratory for studying the structural geometry and kinematics of the closed loops formed by thrust fault traces in terms of a duplex system. In a previous study, the Yeongwol thrust system was interpreted with alternative duplex models: a Boyer-type hinterland-dipping duplex vs. a combination of major imbricate thrusts and their connecting splays. Although the highly inclined beds and thrusts as well as the different stratigraphic packages within the horses of the Yeongwol duplex system may favor the latter, more complicated model, we currently cannot choose one simple answer between the models because of the lack of direct field evidence and time information. Therefore, further structural field investigations and geochronological analyses in the Yeongwol and adjacent areas should be carried out to test the possibility of applying the fold and cleavage duplex models to the Yeongwol thrust system, which will eventually provide clues to solving the enigma of the formation and evolution of the Okcheon Belt.

Spatial Variability of Soil Moisture Content, Soil Penetration Resistance and Crop Yield on the Leveled Upland in the Reclaimed Highland (고령지 개간지 밭의 토양수분과 경도 및 작물수량의 공간변이성)

  • Park, Chol-Soo;Yang, Su-Chan;Lee, Gye-jun;Lee, Jeong-Tae;Kim, Hak-Min;Park, Sang-Hoo;Kim, Dae-Hoon;Jung, Ah-Yeong;Hwang, Seon-Woong
    • Korean Journal of Soil Science and Fertilizer / v.39 no.3 / pp.123-135 / 2006
  • The spatial variability and distribution maps of soil properties, and the relationships between soil properties and crop yields, are not well characterized in agroecosystems that have been land-leveled to facilitate cultivation of newly reclaimed sloping highland. Potato, onion, carrot, Chinese cabbage and radish were grown on coarse sandy loam soil in 2004. Soil moisture content, soil penetration resistance and crop yield were sampled in a 10 m × 50 m field consisting of five plots. In each cultivation plot there were 33 sampling sites for soil moisture, 11 for soil penetration and 33 for crop yield. According to the semivariance analysis, most of the models followed the spherical equation. The significant ranges of the spatial variability models for soil moisture, soil penetration and crop yield were as broad as 33-35 meters in the potato cultivation plot, while those in the Chinese cabbage cultivation plot were as narrow as 5-6 meters. The coefficients of variation (C.V.) of moisture, penetration and yield varied from 14 to 59 percent across the five cultivation plots. The C.V. of potato yield was the highest at 59 percent, and that of the radish cultivation plot was as low as 14 percent. The required sample numbers for determining soil moisture content, soil penetration resistance and crop yield with 10% error at the 0.05 significance level ranged from 8 to 40 for soil moisture, 7 to 25 for soil penetration and 424 to 4,678 for crop yield. The variograms and the distribution maps produced by kriging described the field characteristics well, so the spatial variability information would be useful for more efficient soil management and precision agriculture in the reclaimed highland.
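As a pointer to the geostatistics used above, the sketch below implements the standard spherical semivariogram model and the classical sample-size estimate n = (t · CV / E)² for a given coefficient of variation and allowable error. The parameter values are placeholders, and the sample-size formula is the textbook version rather than necessarily the one used in the study.

```python
import numpy as np

def spherical_semivariogram(h, nugget, sill, a):
    """Spherical model: gamma(h) = nugget + (sill - nugget) *
    (1.5*h/a - 0.5*(h/a)**3) for h <= a, and sill for h > a."""
    h = np.asarray(h, float)
    r = np.clip(h / a, 0.0, 1.0)
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

def required_samples(cv_percent, error_percent=10.0, t=1.96):
    """Classical sample-size estimate n = (t * CV / E)^2 for a given
    coefficient of variation and allowable error at roughly the 0.05 level."""
    return int(np.ceil((t * cv_percent / error_percent) ** 2))

# Example: required_samples(30) gives the sample count needed for a 30 % C.V.
```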

Development and Validation of a Learning Progression for Astronomical Systems Using Ordered Multiple-Choice Items (순위 선다형 문항을 이용한 천문 시스템 학습 발달과정 개발 및 타당화 연구)

  • Maeng, Seungho;Lee, Kiyoung;Park, Young-Shin;Lee, Jeong-A;Oh, Hyunseok
    • Journal of The Korean Association For Science Education / v.34 no.8 / pp.703-718 / 2014
  • This study sought to investigate a learning progression for astronomical systems that synthesizes the motion and structure of the Earth, the Earth-Moon system, the solar system, and the universe. For this purpose, we developed ordered multiple-choice (OMC) items, applied them to elementary and middle school students, and provided validity evidence based on the consequences of assessment for interpreting the learning progression. The study was conducted according to the construct modeling approach. The results showed that the OMC items were appropriate for investigating learning progressions on astronomical systems; that is, based on item-fit analysis, students' responses to the items were consistent with the measurement of the Rasch model. Wright map analysis also showed that the assessment items were very effective in examining students' hypothetical pathways in developing an understanding of astronomical systems. At the lower anchor of the learning progression, students perceived the change of location and direction of celestial bodies with only a two-dimensional, Earth-based view; they failed to connect the locations of celestial bodies with the Earth-Moon system model, and they could recognize only simple patterns of planets in the solar system and the Milky Way. At the intermediate levels, students interpreted celestial motion using models of Earth's rotation and revolution, the Earth-Moon system, and the solar system with a space-based view, and they could also relate the elements of astronomical structures to the models. At the upper anchor, students showed the perspective change between the space-based and Earth-based views and applied it to the celestial motion of astronomical systems, and they understood the correlations among sub-elements of astronomical systems and applied them to the system model.
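For context on the Rasch and Wright-map analyses described above, here is a minimal sketch of the dichotomous Rasch response probability and a crude text-only Wright map. Ordered multiple-choice items are usually scored with a partial-credit extension, which this sketch does not implement, and the ability and difficulty values in the usage comment are placeholders.

```python
import numpy as np

def rasch_probability(theta, b):
    """Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty, both in logits."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def text_wright_map(abilities, difficulties, lo=-3.0, hi=3.0, step=0.5):
    """Crude text Wright map: persons ('X') and items ('O') on one logit scale."""
    edges = np.arange(lo, hi + step, step)
    for top, bottom in zip(edges[::-1][:-1], edges[::-1][1:]):
        persons = int(np.sum((abilities >= bottom) & (abilities < top)))
        items = int(np.sum((difficulties >= bottom) & (difficulties < top)))
        print(f"{bottom:+5.1f} | {'X' * persons:<20} | {'O' * items}")

# Placeholder values for illustration only:
# text_wright_map(np.random.normal(0, 1, 120), np.array([-1.2, -0.4, 0.3, 0.9, 1.6]))
```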