• Title/Summary/Keyword: input parameter

Search Results: 1,636

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.1-23 / 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting helps determine the timing of new product introduction and product design, and supports the establishment of production plans and marketing strategies, enabling a more efficient decision-making process. Moreover, accurate market forecasting enables governments to organize national budgets efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data; categorize products showing similar growth patterns; understand markets in the industry; and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm. The methodology proceeds as follows. At the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption of a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. Collected data that cannot be analyzed directly, owing to a lack of past data or the alteration of code names, require pre-processing. At the second stage, an optimal model for market forecasting is selected. The choice of model can vary with the characteristics of each categorized industry. As this study focuses on the ICT industry, in which new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered; the hybrid model considered in this study estimates the size of the market potential through the Logistic and Gompertz models, and those figures are then used in the Bass model. The third stage is to evaluate which model explains the data most accurately. To do this, the parameters are estimated from the collected past time series data to generate each model's predicted values and to calculate the root-mean-square error (RMSE). The model that shows the lowest average RMSE across all product types is considered the best model. At the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm. A self-organizing map is trained with the market growth pattern parameters of all products or services as input data, and the products or services are organized onto an $N \times N$ map. The number of clusters is increased from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clustering that provides the most meaningful explanation is selected. Based on the final selection of clusters, the boundaries between the nodes are drawn and, ultimately, the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters as well as the market growth curve. The average of the market growth pattern parameters in each cluster is taken as a representative figure. Using this figure, a growth curve is drawn for each cluster, and its characteristics are analyzed. Also, taking the product types in each cluster into consideration, their characteristics can be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
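
As a rough illustration of the second and third stages (not the authors' code; the sample series and starting values are hypothetical), the sketch below fits the Logistic, Gompertz, and Bass models to a cumulative sales series and ranks them by RMSE. The fitted parameter vectors for all products would then serve as the SOM inputs in the fourth stage.

```python
# A minimal sketch (hypothetical data and starting values) of fitting the
# three candidate growth models and ranking them by RMSE.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, a, b):
    # Cumulative logistic growth; m is the market potential.
    return m / (1.0 + a * np.exp(-b * t))

def gompertz(t, m, a, b):
    return m * np.exp(-a * np.exp(-b * t))

def bass(t, m, p, q):
    # Cumulative Bass diffusion; p: innovation, q: imitation coefficient.
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

t = np.arange(1, 11, dtype=float)  # ten selling periods (hypothetical)
y = np.array([5, 12, 25, 48, 80, 115, 142, 158, 166, 170], dtype=float)

candidates = {
    "Logistic": (logistic, [y.max(), 10.0, 0.5]),
    "Gompertz": (gompertz, [y.max(), 3.0, 0.3]),
    "Bass": (bass, [y.max(), 0.03, 0.4]),
}
for name, (model, p0) in candidates.items():
    params, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    print(name, "RMSE =", round(rmse(y, model(t, *params)), 2))
```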

Exploring Ways to Improve the Predictability of Flowering Time and Potential Yield of Soybean in the Crop Model Simulation (작물모형의 생물계절 및 잠재수량 예측력 개선 방법 탐색: I. 유전 모수 정보 향상으로 콩의 개화시기 및 잠재수량 예측력 향상이 가능한가?)

  • Chung, Uran;Shin, Pyeong;Seo, Myung-Chul
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.4 / pp.203-214 / 2017
  • Two reference sets of genetic parameters exist for Korean soybean cultivars. This study proposes seven new sets of genetic parameters (New1~New7) to reduce the uncertainty in predicting potential yield with the two reference sets, and assesses the usefulness of both the reference and the new parameter sets for future research. We evaluated the prediction of flowering time and potential yield by the two reference sets and the seven new sets; the new sets were calibrated at Jinju, Suwon, and Chuncheon during 2003-2006. For both the individual-site and the regionally combined parameter sets, the statistical indicators improved, but not significantly. At Daegu, Miryang, and Jeonju, New7 did not improve the prediction of flowering time over the two reference sets; however, it did improve the prediction of potential yield. At Miryang, the two reference sets showed no predictive skill for flowering time, with coefficients of determination ($R^2$) of 0.00 and 0.01 respectively, whereas New7 reached an $R^2$ of 0.31. On the other hand, at Jeonju the two reference sets gave $R^2$ values of 0.66 and 0.41 for potential yield, while New7 showed no predictive skill ($R^2$ = 0.00). Nevertheless, the well-evaluated regionally combined parameter sets are expected to be useful for predicting flowering time and potential yield in other regions, although further analysis of uncertainty in the input data is still needed.
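
As a small illustration (hypothetical values, not the study's data), the sketch below computes $R^2$ in its $1 - SS_{res}/SS_{tot}$ form for observed versus simulated flowering dates; the paper may instead use the squared correlation.

```python
# A minimal sketch (hypothetical values) of an R^2 comparison of observed
# vs. simulated flowering dates.
import numpy as np

def r_squared(obs, sim):
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

obs = np.array([210.0, 214.0, 212.0, 218.0, 216.0])  # observed day of year
sim = np.array([211.0, 213.0, 215.0, 217.0, 214.0])  # simulated flowering
print(round(r_squared(obs, sim), 2))                 # 0.60 here
```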

The Study on the Embedded Active Device for Ka-Band using the Component Embedding Process (부품 내장 공정을 이용한 5G용 내장형 능동소자에 관한 연구)

  • Jung, Jae-Woong;Park, Se-Hoon;Ryu, Jong-In
    • Journal of the Microelectronics and Packaging Society / v.28 no.3 / pp.1-7 / 2021
  • In this paper, by embedding a bare-die chip-type drive amplifier into a PCB composed of ABF and FR-4, we implement an embedded active device that can be applied in 28 GHz band modules. The ABF has a dielectric constant of 3.2 and a dielectric loss of 0.016; the FR-4 in which the drive amplifier is embedded has a dielectric constant of 3.5 and a dielectric loss of 0.02. The proposed embedded module is fabricated in two structures, and the S-parameter characteristics are verified by measurement. The two structures are a face-up embedding structure and a face-down embedding structure. The fabricated module is measured on a test board designed on Taconic's TLY-5A (dielectric constant: 2.17, dielectric loss: 0.0002). The face-down embedded PCB was expected to give better gain performance owing to the shorter interconnection line from the RF pad of the bare-die chip to the pattern of the formed layer, but measurement showed that grounding the bottom of the bare-die chip through vias results in an oscillation. On the other hand, the face-up structure has a stable gain characteristic of more than 10 dB from 25 GHz to 30 GHz, with a gain of 12.32 dB at the center frequency of 28 GHz. The output characteristics of the module embedded in the face-up structure are measured using a signal generator and a spectrum analyzer. When the input power (Pin) from the signal generator was swept from -10 dBm to 20 dBm, the gain compression point (P1dB) of the embedded module was 20.38 dBm. Ultimately, it was verified by measurement that the oscillation of the bare-die chip used in this paper depends on the grounding method when the chip is embedded in a PCB. Thus, the module embedded in the face-up structure can be properly used for communication modules in millimeter-wave bands.
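
As a small illustration (hypothetical sweep data, not the paper's measurements), the sketch below shows how an output P1dB can be read off a Pin/Pout sweep such as the -10 to +20 dBm sweep described above.

```python
# A minimal sketch (hypothetical data) of extracting the output 1 dB
# gain-compression point (P1dB) from an input power sweep.
import numpy as np

pin = np.arange(-10.0, 21.0, 1.0)      # input power sweep, dBm
g0 = 12.32                             # small-signal gain, dB (28 GHz value)
# Hypothetical soft-compression curve for the output power.
pout = pin + g0 - 5.0 * np.log10(1.0 + 10.0 ** ((pin - 9.0) / 5.0))
gain = pout - pin

# Output P1dB: output power where gain drops 1 dB below small-signal gain.
idx = int(np.argmax(gain <= gain[0] - 1.0))
print("output P1dB ~", round(pout[idx], 2), "dBm")
```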

The Accuracy Evaluation of Digital Elevation Models for Forest Areas Produced Under Different Filtering Conditions of Airborne LiDAR Raw Data (항공 LiDAR 원자료 필터링 조건에 따른 산림지역 수치표고모형 정확도 평가)

  • Cho, Seungwan;Choi, Hyung Tae;Park, Joowon
    • Journal of agriculture & life science / v.50 no.3 / pp.1-11 / 2016
  • With increasing interest in three-dimensional topographic information, there have been many studies on LiDAR (Light Detection And Ranging)-based DEMs (Digital Elevation Models). To produce a LiDAR DEM with better accuracy, the filtering process is crucial: only ground-reflected LiDAR points should be kept to construct the DEM, while non-ground returns must be removed from the raw LiDAR data. In particular, different input values for the parameters of the filtering algorithm are expected to produce different products. This study therefore aims to contribute to a better understanding of how the level of the GroundFilter algorithm's Mean parameter (GFmn), as implemented in the FUSION software, affects the accuracy of the resulting LiDAR DEMs, using LiDAR data collected for the Hwacheon, Yangju, Gyeongsan, and Jangheung experimental watersheds. The effect of GFmn level changes on product accuracy is estimated by measuring and comparing the residuals between field-surveyed elevations and the elevations of the DEMs produced at each GFmn level at the same sample locations. To test whether there are any differences among the five GFmn levels (1, 3, 5, 7, and 9), a one-way ANOVA is conducted. The ANOVA shows that the GFmn level significantly affects accuracy (F-value: 4.915, p<0.01). Given the significance of the GFmn level effect, a Tukey HSD test is conducted as a post-hoc test to group the levels by their significant differences. As a result, the GFmn levels are divided into two subsets ('7, 5, 9, 3' vs. '1'). From the residuals of each individual level, the LiDAR DEM is generated most accurately when GFmn is set to 7. Through this study, the most desirable parameter value can be suggested for producing filtered LiDAR DEM data that provide the most accurate elevation information.
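
As a small illustration of the statistical procedure above (hypothetical residuals, not the study's field data), the sketch below runs a one-way ANOVA across GFmn levels followed by a Tukey HSD post-hoc grouping.

```python
# A minimal sketch (hypothetical residuals) of one-way ANOVA over DEM
# elevation residuals grouped by GFmn level, plus a Tukey HSD test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
levels = [1, 3, 5, 7, 9]
# Hypothetical elevation residuals (m); level 1 is made noisier so the
# grouping resembles the '7, 5, 9, 3' vs. '1' subsets reported above.
residuals = {g: rng.normal(0.0, 0.6 if g == 1 else 0.3, 50) for g in levels}

f_val, p_val = f_oneway(*residuals.values())
print("F = %.3f, p = %.4f" % (f_val, p_val))

values = np.concatenate(list(residuals.values()))
groups = np.repeat(levels, 50)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```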

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.3 / pp.153-163 / 2017
  • The objective of this study is to improve the speed of the models that estimate the weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regressions on Independent Slopes Model) based precipitation) used in the Agrometeorological Early Warning System (http://www.agmet.kr). The weather estimation currently runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, the server is not dedicated to handling a single county, so very high overhead is involved in computing the ten counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were applied, and their performance and applicability were evaluated. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for computation, suggesting the need for techniques that reduce disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers, but each server must keep its own cache of the input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads, such as PRISM.
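
As a toy illustration of findings (1) and (2) (hypothetical file layout and computation, not the system's code), the sketch below caches grid reads in memory and parallelizes a Growing Degree Days computation over row bands of a grid; each worker process keeps its own cache, echoing the per-server cache idea.

```python
# A minimal sketch (hypothetical layout) of an in-process read cache plus
# process-level parallelism over bands of a daily temperature grid.
from functools import lru_cache
from multiprocessing import Pool

import numpy as np

@lru_cache(maxsize=32)
def load_grid(path):
    # Cached disk read: repeated requests for the same grid hit memory
    # instead of disk, easing the I/O bottleneck of finding (1).
    return np.load(path)

def degree_days(tile, base=10.0):
    # Per-cell Growing Degree Days increment for one daily grid.
    return np.maximum(tile - base, 0.0)

def process_band(args):
    path, start, stop = args
    return degree_days(load_grid(path)[start:stop])

if __name__ == "__main__":
    np.save("tmean.npy", np.random.uniform(5, 30, size=(1000, 1000)))
    bands = [("tmean.npy", i, i + 250) for i in range(0, 1000, 250)]
    with Pool(4) as pool:
        parts = pool.map(process_band, bands)
    gdd = np.vstack(parts)
    print(gdd.shape, round(float(gdd.mean()), 2))
```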

A Study on the Relationship of Learning, Innovation Capability and Innovation Outcome (학습, 혁신역량과 혁신성과 간의 관계에 관한 연구)

  • Kim, Kui-Won
    • Journal of Korea Technology Innovation Society / v.17 no.2 / pp.380-420 / 2014
  • Employees increasingly need to acquire sufficient expert or innovation capability to prepare for ever-growing uncertainties in their operational domains. Despite this, there has not been enough research on how the operational inputs to employees' innovation outcomes, innovation activities such as the acquisition, exercise, and promotion of employees' innovation capability, and the resulting innovation outcomes interact with each other. This is believed to be because most current research on innovation focuses on the country, industry, and corporate levels rather than on an individual corporation's innovation inputs, innovation outcomes, and innovation activities themselves. This study therefore departs from the currently prevalent frames and views on innovation and focuses on the strategic policies required to enhance an organization's innovation capability, by quantitatively analyzing employees' innovation outcomes and the innovation activities most relevant to them. The research model offers both a linear and a structural model of the trio of learning, innovation capability, and innovation outcome, and the following four hypotheses are quantitatively tested: Hypothesis 1] Different levels of innovation capability produce different innovation outcomes (accepted, p-value = 0.000 < 0.05). Hypothesis 2] Different amounts of learning time produce different innovation capabilities (rejected, p-values = 0.199, 0.220 > 0.05). Hypothesis 3] Different amounts of learning time produce different innovation outcomes (accepted, p-value = 0.000 < 0.05). Hypothesis 4] Innovation capability acts as a significant parameter in the relationship between the amount of learning time and the innovation outcome (structural modeling test). The structural model, after t-tests on Hypotheses 1 through 4, shows that irregular on-the-job training and e-learning directly affect the learning time factor, while job experience level, employment period, and capability level measurement directly affect the innovation capability factor. This is further supported by the fact that patent time directly affects the innovation capability factor rather than the learning time factor. Based on the four hypotheses, this study proposes the following measures to maximize an organization's innovation outcome: first, frequent irregular on-the-job training based on an e-learning system; second, efficient management of employment period, job skill levels, etc. through active sponsorship and energization of communities of practice (CoP) as a form of irregular learning; and third, an innovation outcome function of the form $Y_i = f(e, i, s, t, w) + \varepsilon$, soundly based on a smart system of capability level measurement. This innovation outcome function is what the study considers the most appropriate and important reference model.
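
As a rough illustration of the mediation logic in Hypothesis 4 (simulated data; statsmodels is an assumed tool, not the study's software), the sketch below fits the two regressions that a simple mediation test uses.

```python
# A minimal sketch (simulated data) of the mediation structure: learning
# time -> innovation capability -> innovation outcome, via two OLS fits.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
learning = rng.normal(10.0, 3.0, n)                    # learning time (hours)
capability = 0.5 * learning + rng.normal(0.0, 1.0, n)  # mediator
outcome = 0.8 * capability + rng.normal(0.0, 1.0, n)   # innovation outcome

# Path a: learning time -> innovation capability
path_a = sm.OLS(capability, sm.add_constant(learning)).fit()
# Paths b and c': outcome regressed on capability and learning jointly
X = sm.add_constant(np.column_stack([capability, learning]))
path_bc = sm.OLS(outcome, X).fit()

# A significant capability coefficient with a weak direct learning
# coefficient is consistent with capability acting as a mediator.
print(path_a.pvalues)
print(path_bc.params, path_bc.pvalues)
```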

The Study of New Reconstruction Method for Brain SPECT on Dual Detector System (Dual detector system에서 Brain SPECT의 new reconstruction method의 연구)

  • Lee, Hyung-Jin;Kim, Su-Mi;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.57-62 / 2009
  • Purpose: Brain SPECT is more sensitive to motion than other studies. In particular, when the 1-day subtraction method is applied for Diamox SPECT, a shorter study time is needed to prevent re-examination. We needed a new study condition and analysis method on a dual-detector system because the triple-head camera at Seoul National University Hospital was to be decommissioned. We therefore tried to increase image quality and to make the dual-head system achieve a study time equivalent to that of the triple-head system by using a new analysis program. Materials and Methods: Using an IEC phantom, we estimated contrast, SNR, and FWHM. For the Hoffman 3D brain phantom, which resembles a real brain, we assumed that 5% of the injected dose was distributed in brain tissue. For comparison with the existing FBP method, we used a fan-beam collimator, and applied 15 sec and 25 sec/frame for each SPECT study using LEHR and LEUHR collimators. We used the OSEM2D and Onco-Flash3D reconstruction methods, and compared each reconstruction with and without 5 mm Gaussian post-filtering. Attenuation correction was applied manually. We then performed brain SPECT on patients injected with 15 mCi of $^{99m}Tc$-HMPAO according to the results of the phantom study. Finally, technologists and physicians evaluated the results. Results: The study shows that Flash3D reconstruction is better than the existing FBP and OSEM2D when evaluated with the IEC phantom. According to the evaluation, Flash3D needs 5 mm post-filtering for both 15 sec and 25 sec acquisitions, and 8 subsets with 8 iterations are appropriate for Flash3D. OSEM2D also needs post-filtering; 4 subsets with 8 iterations are appropriate for 15 sec, and 8 subsets with 12 iterations for 25 sec. Regarding the injected dose per patient and the study time, the combination of input parameters of 15 sec/frame, LEHR collimator, Flash3D analysis with 8 subsets and 8 iterations, and 5 mm Gaussian post-filtering is the most appropriate. On the other hand, the LEUHR collimator was not appropriate for the 1-day subtraction method of the Diamox study because of its lower sensitivity. Conclusions: We showed that the dual-head camera can achieve the same short-study-time advantage as the triple-head gamma camera, and obtained good results in changing from the existing fan-beam collimator to a parallel-hole collimator. In addition, the resolution and contrast of the new method were better than those of the FBP method, and the sensitivity and accuracy of the image can be improved because less subjectivity is involved than with the Metz filter of FBP. We expect better image quality and shorter study times for brain SPECT on dual-detector systems.
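
As a small illustration of the post-filtering step (hypothetical volume and voxel size; scipy is an assumed tool, not the scanner's software), the sketch below converts the 5 mm FWHM to the sigma that a Gaussian filter expects.

```python
# A minimal sketch (hypothetical volume and voxel size) of 5 mm Gaussian
# post-filtering; scipy expects sigma, so the FWHM is converted with
# FWHM = 2*sqrt(2*ln 2) * sigma.
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 5.0
VOXEL_MM = 2.0   # assumed isotropic voxel size of the reconstruction
sigma_vox = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / VOXEL_MM

volume = np.random.rand(64, 64, 64)   # placeholder reconstructed volume
smoothed = gaussian_filter(volume, sigma=sigma_vox)
print(round(sigma_vox, 3))            # ~1.062 voxels
```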

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source for such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency ratings for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore constructed on the basis of two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear-programming-based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of the DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory, and the method has so far shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the points closest to the maximum-margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used: the original inputs are mapped into a high-dimensional dot-product feature space in which a linear boundary can be found. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed SVM to develop a data-mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one approach among binary classification methods, as well as the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data-mining-based multi-class prediction model. Among the three manners of multi-classification, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also present accuracy within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
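
As a small illustration of the classifier described above (synthetic data, not the 154-company KOSDAQ sample; scikit-learn is an assumed tool, not the one used in the paper), the sketch below trains an RBF-kernel SVM with one-against-one multi-class decomposition and reports both the exact hit ratio and the within-one-class accuracy.

```python
# A minimal sketch (synthetic data) of multi-class SVM with an RBF kernel;
# scikit-learn's SVC applies the one-against-one decomposition described
# above to multi-class problems.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                    # hypothetical financial ratios
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], [-1.0, 0.0, 1.0])  # 4 DEA-style classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("exact hit ratio :", round(accuracy_score(y_te, pred), 3))
print("within one class:", round(float(np.mean(np.abs(pred - y_te) <= 1)), 3))
```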

Ore Minerals, Fluid Inclusions, and Isotopic(S.C.O) Compositions in the Diatreme-Hosted Nokdong As-Zn Deposit, Southeastern Korea: The Character and Evolution of the Hydrothermal Fluids (다이아튜림 내에 부존한 녹동 비소-아연광상의 광석광물, 유체포유물, 유황-탄소-산소 동위원소 : 광화용액의 특성과 진화)

  • Park, Ki-Hwa;Park, Hee-In;Eastoe, Christopher J.;Choi, Suck-Won
    • Economic and Environmental Geology / v.24 no.2 / pp.131-150 / 1991
  • The Weolseong diatreme was temporally and spatially related to the intrusion of the Gadaeri granite and was mineralized by meteoric aqueous fluids. In the Nokdong As-Zn deposit, pyrite, arsenopyrite, and sphalerite are the most abundant sulfide minerals. They are associated with minor amounts of magnetite, pyrrhotite, chalcopyrite, and cassiterite, and trace amounts of Pb-Sb-Bi-Ag sulphosalts. The As-Zn ore probably formed at about $350^{\circ}C$ according to fluid inclusion data and compositions estimated from the arsenic content of arsenopyrite and the iron content of sphalerite intergrown with pyrrhotite + chalcopyrite + cubanite. Heating studies of fluid inclusions in quartz indicate a temperature range between 180 and $360^{\circ}C$, and freezing data indicate a salinity range from 0.8 to 4.1 eq. wt% NaCl. The coexisting assemblage pyrite + pyrrhotite + arsenopyrite suggests that $H_2S$ was the dominant reduced sulfur species and constrains the fluid parameters as follows: $10^{-34.5} < f_{O_2} < 10^{-33}$, $10^{-11} < f_{S_2} < 10^{-8}$, $10^{-2.4} < f_{H_2S} < 10^{-1.6}$ atm, and pH = 5.2 (sericite stable) at $300^{\circ}C$. The sulfur isotope values range from 1.8 to 5.5‰ and indicate that the sulfur in the sulfides is of magmatic origin. The carbon isotope values range from -7.8 to -11.6‰, and the oxygen isotope values from the carbonates in the mineralized wall rock range from 2 to 11.4‰. The oxygen isotope compositions of water coexisting with calcite require an input of meteoric water. The geochemical data indicate that the ore-forming fluid was probably generated by a variety of mechanisms, including deep circulation of meteoric water driven by magmatic heat, with a possible input of magmatic water and ore components.

A Sensitivity Analysis of JULES Land Surface Model for Two Major Ecosystems in Korea: Influence of Biophysical Parameters on the Simulation of Gross Primary Productivity and Ecosystem Respiration (한국의 두 주요 생태계에 대한 JULES 지면 모형의 민감도 분석: 일차생산량과 생태계 호흡의 모사에 미치는 생물리모수의 영향)

  • Jang, Ji-Hyeon;Hong, Jin-Kyu;Byun, Young-Hwa;Kwon, Hyo-Jung;Chae, Nam-Yi;Lim, Jong-Hwan;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.2 / pp.107-121 / 2010
  • We conducted a sensitivity test of the Joint UK Land Environment Simulator (JULES), investigating the influence of biophysical parameters on the simulation of gross primary productivity (GPP) and ecosystem respiration (RE) for two typical ecosystems in Korea. For this test, we used whole-year eddy-covariance flux observations measured in 2006 at two KoFlux sites: (1) a deciduous forest in complex terrain in Gwangneung and (2) farmland with heterogeneous mosaic patches in Haenam. Our analysis showed that the simulated GPP was most sensitive to the maximum rate of RuBP carboxylation and to leaf nitrogen concentration for both ecosystems. RE was sensitive to the wood biomass parameter for the deciduous forest in Gwangneung; for the mixed farmland in Haenam, however, RE was most sensitive to the maximum rate of RuBP carboxylation and leaf nitrogen concentration, like the simulated GPP. For both sites, the JULES model overestimated both GPP and RE when the default values of the input parameters were adopted. Considering that the leaf nitrogen concentration observed at the deciduous forest site was only about 60% of its default value, a significant portion of the model's overestimation can be attributed to this discrepancy in the input parameters. Our findings demonstrate that these key biophysical parameters should be evaluated carefully for each ecosystem before any simulation and interpretation of ecosystem carbon exchange in Korea.
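
As a toy illustration of such a one-at-a-time sensitivity test (a made-up light-response model, not JULES), the sketch below perturbs the leaf nitrogen concentration to 60% of its default, mirroring the observed discrepancy, and reports the change in mean simulated GPP.

```python
# A minimal sketch (toy model, not JULES) of a one-at-a-time parameter
# sensitivity test: perturb leaf nitrogen concentration and compare GPP.
import numpy as np

def toy_gpp(vcmax25, leaf_n, par):
    # Hypothetical model: carboxylation capacity scales with leaf N,
    # and GPP follows a saturating light response.
    vcmax = vcmax25 * (leaf_n / 2.0)        # assumed linear N scaling
    return vcmax * par / (par + 500.0)

par = np.linspace(0.0, 2000.0, 100)         # PAR, umol m-2 s-1
baseline = toy_gpp(50.0, 2.0, par)          # default parameter values
observed_n = toy_gpp(50.0, 1.2, par)        # 60% of the default leaf N

change = 100.0 * (observed_n.mean() / baseline.mean() - 1.0)
print("mean GPP change: %.1f%%" % change)
```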