The site effects of seismic stations were evaluated by a simultaneous inversion of the stochastic point-source ground-motion model (STGM model; Boore, 2003) parameters based on an accumulated dataset of horizontal shear-wave Fourier spectra. The site effects were expressed by a model parameter $K_0$ and a frequency-dependent site amplification function A(f). The H/V ratio of the Fourier spectra was used as an initial estimate of A(f) for the inversion; the final A(f), regarded as the combined effect of crustal amplification and local site effects, was then calculated by averaging the log residuals at each site from the inversion and adding the mean log residual to the H/V ratio. The seismic stations were classified into five classes according to $logA_{1-10}^{max}$, the maximum level of the site amplification function in the range 1 Hz < f < 10 Hz: A: $logA_{1-10}^{max}$ < 0.2; B: 0.2 $\leq$ $logA_{1-10}^{max}$ < 0.4; C: 0.4 $\leq$ $logA_{1-10}^{max}$ < 0.6; D: 0.6 $\leq$ $logA_{1-10}^{max}$ < 0.8; E: 0.8 $\leq$ $logA_{1-10}^{max}$. The classification was supported by a systematic shift of the dominant frequency of the average A(f) of each class as the class changes. The change of site class after seismic stations were moved to better site conditions was successfully described by the classification. In addition, the observed PGA (Peak Ground Acceleration) values for two recent moderate earthquakes were well separated according to the proposed station classes.
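The five-class scheme above amounts to a simple threshold rule on the maximum log amplification; a minimal sketch in Python (the function name and input representation are illustrative, not from the paper):

```python
def site_class(log_a_max):
    """Classify a station by the maximum log site amplification
    in the 1-10 Hz band, using the A-E thresholds above."""
    if log_a_max < 0.2:
        return "A"
    elif log_a_max < 0.4:
        return "B"
    elif log_a_max < 0.6:
        return "C"
    elif log_a_max < 0.8:
        return "D"
    return "E"
```

For instance, a station whose amplification peaks at a factor of about 4 (log10(4) ≈ 0.60) would fall in class D.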
This study applies the GARCH model (Bollerslev, 1986) to analyze the structural characteristics of price volatility in the domestic aquacultural fish market of Korea. As a case study, flatfish and rockfish are analyzed as major species accounting for a relatively high share of production volume among fish produced in Korea. The analysis uses daily market data (from January 1, 2000 to June 30, 2008) published by the Noryangjin Fisheries Wholesale Market, located in Seoul, Korea. As a preliminary empirical step, this study performs normality tests on the trading volume and price volatility of flatfish and rockfish, using the Jarque-Bera test statistic. As a result, first, the null hypothesis that the empirical distribution follows a normal distribution was rejected for both species. The distributions of the daily market data were not only positively skewed but also leptokurtic with a long right tail. Secondly, serial correlations were found in the market trading volume and price volatility of the two species over a very long period. Thirdly, the results of the unit root test and the ARCH-LM test showed that all the time series were stationary and exhibited ARCH effects. These statistical characteristics provide reasonable grounds for the fitness of the GARCH model for estimating the conditional variances that reveal price volatility. From the empirical analysis, this study drew the following conclusions. First, from an analysis of the potential effects of seasonality and the day of the week on the price volatility of aquacultural fish, Monday effects were found in both species, and Thursday and Friday effects were also found in flatfish.
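The Jarque-Bera statistic used for the normality test combines the sample skewness and kurtosis; a minimal pure-Python sketch (the variable names are illustrative):

```python
def jarque_bera(x):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is the sample skewness and K the sample kurtosis.
    Large JB values reject the normality null hypothesis."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n  # second central moment
    m3 = sum((v - mean) ** 3 for v in x) / n  # third central moment
    m4 = sum((v - mean) ** 4 for v in x) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

A leptokurtic, right-skewed series like the daily data described above yields a large JB value, so the normal null is rejected.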
This indicates that Monday is effective in expanding the price volatility of the aquacultural fish market and has a greater effect on price volatility than other days of the week, since it carries more new information accumulated over the weekend. Secondly, the empirical analysis led to a common conclusion that the price volatility of flatfish and rockfish was very high. The persistence parameter ($\lambda$), an index of the likelihood that current volatility will be sustained into the future, was higher than 0.8, and nearly 1, for both flatfish and rockfish, which indicates volatility clustering. This study also estimated and compared models assuming a normal distribution and models assuming a t-distribution for the error term, in order to determine the fitness of the respective models. As a result, the fitness of the GARCH(1, 1)-t model, in which the error term follows a t-distribution reflecting the fat-tailed characteristics found in the basic statistical analysis, was better than that of the model assuming a normal distribution. In conclusion, this study is meaningful in that it is the first in Korea to investigate the price volatility of Korean aquacultural fishery products, although it was partly limited by the available official statistical data. Therefore, the results of this study are expected to be useful as reference material for making and assessing governmental policies. The results should also help producers build fishery business plans and take timely measures against potential price fluctuations of fishery products in the market. It is hoped that further studies on price volatility in the fishery market will extend to a wider variety of topics and issues in the near future.
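The conditional-variance recursion and the persistence measure behind the GARCH(1, 1) results can be sketched as follows (the parameter values are illustrative, not the paper's estimates):

```python
def garch11_variance(returns, omega, alpha, beta, sigma2_0):
    """Conditional variance recursion of GARCH(1, 1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    The persistence parameter is lambda = alpha + beta; values
    near 1 indicate volatility clustering."""
    sigma2 = [sigma2_0]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

# Persistence of an illustrative (not estimated) fit:
# 0.10 + 0.85 = 0.95 > 0.8, i.e., a volatility shock today
# largely carries over to tomorrow.
persistence = 0.10 + 0.85
```

The paper's finding of $\lambda$ > 0.8 corresponds to $\alpha + \beta$ in this recursion being close to one for both species.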
With increasing interest, there have been studies on LiDAR (Light Detection And Ranging)-based DEMs (Digital Elevation Models) for acquiring three-dimensional topographic information. To produce a LiDAR DEM with better accuracy, the filtering process is crucial: only ground-reflected LiDAR points are kept to construct the DEM, while non-ground points must be removed from the raw LiDAR data. In particular, changes in the input values of the parameters of the filtering algorithm are expected to produce different products. Therefore, this study aims to contribute to a better understanding of how changes in the level of the GroundFilter algorithm's Mean parameter (GFmn), embedded in the FUSION software, affect the accuracy of the LiDAR DEM products, using LiDAR data collected for the Hwacheon, Yangju, Gyeongsan and Jangheung experimental watersheds. The effect of GFmn level changes on product accuracy is estimated by measuring and comparing the residuals between field-surveyed elevations and the LiDAR DEMs produced with different GFmn levels at the same sample locations. To test whether there are any differences among the five GFmn levels (1, 3, 5, 7 and 9), a one-way ANOVA is conducted. The one-way ANOVA shows that the change in GFmn level significantly affects accuracy (F-value: 4.915, p < 0.01). Given the significance of the GFmn level effect, a Tukey HSD test is conducted as a post hoc test to group the levels by significant differences. As a result, the GFmn levels are divided into two subsets ('7, 5, 9, 3' vs. '1'). From the residuals of each individual level, the LiDAR DEM is generated most accurately when GFmn is set to 7. Through this study, the most desirable parameter value can be suggested for producing filtered LiDAR DEM data that provide the most accurate elevation information.
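The one-way ANOVA F-value reported above is the ratio of the between-group to the within-group mean square; a minimal pure-Python sketch of that computation (the sample data are illustrative, not the study's residuals):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: F = MSB / MSW, where MSB is the
    between-group mean square and MSW the within-group mean square."""
    k = len(groups)                      # number of groups (GFmn levels)
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
              for g in groups)           # between-group sum of squares
    ssw = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
              for g in groups)           # within-group sum of squares
    msb = ssb / (k - 1)
    msw = ssw / (n - k)
    return msb / msw
```

In the study's setting, each group would hold the DEM-minus-field elevation residuals for one GFmn level; a significant F (here 4.915, p < 0.01) justifies the Tukey HSD post hoc grouping.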
Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). A CNN is characterized by dividing the input image into small sections to recognize partial features and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud transaction detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to diagnose the possibility of solving business problems with deep learning technologies, based on the case of online shopping companies, which have big data, relatively easily identifiable customer behavior, and high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so the analysis of customer behavior to maximize profit is becoming more and more important. In this study, we propose a 'CNN model of Heterogeneous Information Integration' using a CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
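The CNN's core operation described above — scanning small sections of the input with a shared filter — can be sketched as a plain "valid" 2-D convolution (strictly a cross-correlation, as in most deep-learning libraries; the arrays are illustrative):

```python
def conv2d_valid(image, kernel):
    """Slide the kernel over the image and sum elementwise products.
    Each output cell sees only a small local section of the input,
    which is how a CNN detects partial features before combining them."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

Stacking such filtered maps with nonlinearities and pooling, and then combining them in fully connected layers, yields the "recognize parts, then the whole" behavior the abstract describes.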
The proposed model learns from a convolutional neural network with a multi-layer perceptron structure by combining structured and unstructured information. To optimize its performance, the model employs 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design'; we evaluate the performance of each architecture and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual data from a specific online shopping company in Korea, comprising its transactions, customer profiles, and VOC (voice of customer) data. The data extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (one month). For these customers, we use their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted in that month. The experiment is divided into two stages. In the first stage, we evaluate the three architectures that affect the performance of the proposed model and select the optimal parameters. In the second stage, we evaluate the performance of the proposed model with those parameters. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (Support Vector Machine), and ANN (Artificial Neural Network). Therefore, it is significant that the use of unstructured information contributes to predicting customer behavior, and that CNNs can be applied to business problems as well as to image recognition and natural language processing.
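The six binary targets are behavioral labels derived from transaction history. As one hypothetical illustration (the threshold, field format, and cutoff are assumptions for illustration, not the paper's definitions), a re-purchaser flag could be derived like this:

```python
def label_repurchaser(purchase_dates, cutoff):
    """Hypothetical labeling rule: a customer is flagged as a
    re-purchaser (1) if at least two purchases occur on or before
    the cutoff date, else 0. Dates are ISO-format strings, so
    plain string comparison orders them chronologically."""
    n = sum(1 for d in purchase_dates if d <= cutoff)
    return 1 if n >= 2 else 0
```

Analogous rules over amounts, refunds, and discounts would produce the other five binary targets.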
The experiments confirm that the CNN is effective in understanding and interpreting the meaning of context in the text VOC data. It is also significant that this empirical research, based on actual data from an e-commerce company, shows that very meaningful information for predicting customer behavior can be extracted from VOC data written in text form directly by customers. Finally, through the various experiments, the proposed model provides useful information for future research related to parameter selection and performance.
From Böhm-Vitense's atmospheric model calculations, the relations [$T_e$, (B-V)] and [B.C., (B-V)] with respect to heavy element abundance were obtained. Using these relations and the evolutionary model calculations of Rood, and of Sweigart and Gross, analytic expressions for some physical parameters relating to the C-M diagrams of globular clusters were derived and applied to 21 globular clusters with observed transition periods of RR Lyrae variables. More than 20 different parameters were examined for each globular cluster. The derived ranges of some basic parameters are as follows: $Y=0.21{\sim}0.33$; $Z=1.5{\times}10^{-4}{\sim}4.5{\times}10^{-3}$; age $t=9.5{\sim}19{\times}10^9$ years; mass of red giants $m_{RG}=0.74m_{\odot}{\sim}0.91m_{\odot}$; mass of RR Lyrae stars $m_{RR}=0.59m_{\odot}{\sim}0.75m_{\odot}$; the visual magnitude difference between the turnoff point and the horizontal branch (HB) ${\Delta}V_{to}=3.1{\sim}3.4$ ($<{\Delta}V_{to}>=3.32$); the color of the blue edge of the RR Lyrae gap $(B-V)_{BE}=0.17{\sim}0.21$ ($<(B-V)_{BE}>=0.18$); $[\frac{m}{L}]_{RR}=-1.7{\sim}-1.9$; and the mass difference of $m_{RR}$ relative to $m_{RG}$, $(m_{RG}-m_{RR})/m_{RG}=0.0{\sim}0.39$. The ranges of the derived parameters agree reasonably well with the observed ones and with those estimated by others. Some important results obtained herein can be summarized as follows: (i) There are considerable variations in the initial helium abundance and in the age of globular clusters. (ii) A radial gradient of heavy element abundance does exist for globular clusters, as shown by Janes for field stars and open clusters. (iii) The helium abundance seems to have increased with age through massive star evolution after a considerable amount (Y>0.2) of helium had been attained in Big-Bang nucleosynthesis, but no radial gradient of helium abundance is seen.
(iv) A considerable amount of heavy elements ($Z{\sim}10^{-3}$) might have formed in the inner halo ($r_{GC}$<10 kpc) during the earliest galactic collapse, after which the heavy element abundance was slowly enriched towards the galactic center and disk, establishing the radial gradient of heavy element abundance. (v) The final formation of the galactic disk might have taken longer than the halo formation by about half the galactic age, supporting the slow, inhomogeneous collapse model of Larson. (vi) Of the three principal parameters controlling the morphology of C-M diagrams, the first parameter is heavy element abundance, the second age, and the third helium abundance. (vii) The globular clusters can be divided into three different groups, AI, BI and CII, according to Z, Y and age, as well as to Dickens' HB types. BI group clusters of HB types 4 and 5, like M 3 and NGC 7006, are the oldest and have the lowest helium abundance of the three groups; they also appear in the inner halo. On the other hand, the youngest AI clusters have the highest Z and Y, and appear in the innermost halo region and in the disk. (viii) From the clean separation of the clusters into three groups, a three-dimensional classification with the three parameters Z, Y and age is presented. (ix) The anomalous C-M diagrams can be explained in terms of the three principal parameters. That is, the anomaly of NGC 362 and NGC 7006 is accounted for by an age smaller by the order of $1{\sim}2{\times}10^9$ years, rather than by a helium abundance difference, compared with M 3. (x) The difference between the two Oosterhoff types I and II can be explained in terms of the mean mass difference of the RR Lyrae variables rather than the helium abundance difference suggested by Stobie. The mean mass of the variables in Oosterhoff type I clusters is smaller by $0.074m_{\odot}$, which is exactly consistent with Rood's estimate.
Since it was found that the mean mass of RR Lyrae stars increases with decreasing Z, the two Oosterhoff types can be explained substantially by the metal abundance difference; the type II has Z<$3.4{\times}10^{-4}$, and the type I has higher Z than the type II.
Using an aggregator model, we examine the possibilities for substitution between Korea's exports, imports, domestic sales and domestic inputs (particularly labor), and between disaggregated export and import components. Our approach draws heavily on an economy-wide GNP function similar to Samuelson's, modeling trade functions as derived from an integrated production system. Under the conditions of homotheticity and weak separability, the GNP function facilitates consistent aggregation that retains certain properties of the production structure. It also enables a two-stage optimization process that yields not only the net output price elasticities of the first-level aggregator functions, but also those of the second-level individual components of exports and imports. For the implementation of the model, we apply the Symmetric Generalized McFadden (SGM) function developed by Diewert and Wales to both stages of estimation. The first stage estimates the unit quantity equations of the second-level exports and imports, which comprise four components each. The parameter estimates obtained in the first stage are used to derive instrumental variables for the aggregate export and import prices employed in the upper model. In the second stage, the net output supply equations derived from the GNP function are used to estimate the price elasticities of the first-level variables: exports, imports, domestic sales and labor. With these estimates in hand, we can compute various elasticities of both the net output supply functions and the individual components of exports and imports. At the aggregate (first) level, exports appear to be substitutable with domestic sales, while labor is complementary with imports.
An increase in the price of exports would reduce the domestic sales supply, and a decrease in the wage rate would boost the demand for imports. On the other hand, labor and imports are complementary with exports and domestic sales in the input-output structure. At the disaggregate (second) level, the estimated price elasticities of the export and import components indicate that both substitution and complementarity exist between them. Although these elasticities are interesting in their own right, they would be even more usefully applied as inputs to a computational general equilibrium model.
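A price elasticity of the kind estimated above measures the percentage response of quantity to a one-percent price change; a minimal numerical sketch (the supply function is a stand-in for illustration, not the estimated SGM system):

```python
def price_elasticity(supply, p, dp=1e-6):
    """Approximate the price elasticity (dQ/dP) * (P/Q) of a
    supply function by a central finite difference around price p."""
    q = supply(p)
    dq = (supply(p + dp) - supply(p - dp)) / (2 * dp)
    return dq * p / q

# For a constant-elasticity supply Q = A * P^e, the measured
# elasticity recovers e regardless of the price level.
supply = lambda p: 2.0 * p ** 0.7
```

In the paper's framework, the analogous derivatives are taken analytically from the net output supply equations of the estimated GNP function rather than numerically.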
This paper presents a methodology for constructing the time-area curve via the width function, and thereby rationally estimating the time of concentration and the storage coefficient of the Clark model within the framework of the method of moments. To this end, the time-area curve is built by rescaling the grid-based width function under the assumption of pure translation, and analytical expressions for the two parameters of the Clark model are proposed in terms of the method of moments. The methodology based on these analytical expressions is compared, in terms of the peak discharge and time to peak of the simulated direct runoff hydrographs and their coefficient of efficiency relative to the observed ones, with (1) the traditional optimization method of the Clark model provided by HEC-1, in which the symmetric time-area curve is used and the difference between observed and simulated hydrographs is minimized, and (2) the same optimization method with the time-area curve replaced by the rescaled width function. The following points are worth emphasizing: (1) The optimization method of HEC-1 with the rescaled width function yields the parameters that best reflect the observed runoff hydrograph with respect to the peak discharge coordinates and the coefficient of efficiency. (2) For better application of the Clark model, it is recommended to use a time-area curve capable of accounting for the irregular drainage structure of a river basin, such as the rescaled width function, instead of the symmetric time-area curve of HEC-1. (3) The moment-based methodology with the rescaled width function developed in this study also gives satisfactory simulation results in terms of the peak discharge coordinates and the coefficient of efficiency.
In particular, the mean velocities estimated by this method, which characterize the translation effect of the time-area curve, are consistent with the field survey results at the points of interest in this study. (4) The moment-based methodology is confirmed to be an effective tool for the quantitative assessment of the translation and storage effects of a natural river basin. (5) The runoff hydrographs simulated by the moment-based methodology tend to be more right-skewed than the observed ones and have lower peaks. This is inferred to be due to the use of only one mean velocity in the parameter estimation. Further research is required to incorporate the hydrodynamic heterogeneity between hillslope and channel network into the construction of the time-area curve.
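Under pure translation, each cell's flow distance maps to a travel time t = d / v, so the time-area curve is simply the width function rescaled by a single mean velocity; a minimal sketch (the width-function values and units are illustrative):

```python
def time_area_curve(width_function, cell_size, velocity):
    """Rescale a grid-based width function (cell counts per flow-
    distance increment of size cell_size) into a cumulative
    time-area curve under pure translation: t_i = d_i / velocity.
    Returns (travel times, cumulative area fractions)."""
    total = sum(width_function)
    times, cum = [], []
    acc = 0.0
    for i, w in enumerate(width_function):
        acc += w
        times.append((i + 1) * cell_size / velocity)  # d_i / v
        cum.append(acc / total)
    return times, cum
```

The travel time of the most remote increment plays the role of the time of concentration, and the moments of this curve feed the analytical Clark-parameter expressions described above.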
Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted analyses under a specific market condition and presented the results as the general impact of Internet channel introduction; their results are therefore strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game-theoretic model. We capture various market conditions with consumer density and the disutility of using the Internet.
The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). Alternatively, an independent Internet retailer such as Amazon could enter the market (II); in this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed over a two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location (${\chi}_i$) and his disutility of using the Internet channel (${\delta}_{N_i}$).
Together, these two consumer heterogeneities capture various market conditions.
(a) illustrates a market with symmetric consumer distributions. The model also explicitly captures asymmetric distributions of consumer disutility in a market. In a market like that represented in (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents a market in which 1) the product is suitable for Internet transactions (e.g., books), or 2) the level of e-commerce readiness is high, as in Denmark or Finland. On the other hand, in a market like (b), the average consumer disutility of using an Internet store is relatively greater than that of using a physical store. Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, are examples of this market condition.
The scenarios of consumer distributions analyzed in this study are as follows: the range of the disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of the consumer distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
The analysis results can be summarized as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with a decreasing average travel cost relative to the average disutility of Internet use. We find that a channel that has an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results also indicate that the degree of price discrimination varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as the average travel cost in a market decreases. This indicates that the manufacturer's channel power relative to that of the independent physical retailer becomes stronger with a decreasing average travel cost.
This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.
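The mechanism behind these results can be illustrated with a bare-bones version of consumer channel choice (the linear cost structure, parameter values, and consumer grid are assumptions for illustration, not the paper's equilibrium model):

```python
def channel_shares(locations, disutilities, price_store, price_net,
                   travel_cost):
    """Each consumer i buys from the cheaper effective source:
    the physical store costs price_store + travel_cost * |x_i|,
    the Internet store costs price_net + delta_i.
    Returns (physical store share, Internet store share)."""
    store = net = 0
    for x, delta in zip(locations, disutilities):
        if price_store + travel_cost * abs(x) <= price_net + delta:
            store += 1
        else:
            net += 1
    n = store + net
    return store / n, net / n
```

With widely dispersed consumers (large |x| relative to the Internet disutility), the Internet store captures most of the market, mirroring the finding that a high average travel cost makes the Internet store a mass retailer.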