• Title/Summary/Keyword: 초기모형 (initial model)

Search Results: 1,304

A Study on the Status of Startups and Their Nurturing Plans: Focusing on Startups in Seongnam City (스타트업 실태 및 육성방안에 관한 연구: 성남시 스타트업을 중심으로)

  • Han, Kyu-Dong;Jeon, Byung-Hoon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.17 no.5
    • /
    • pp.67-80
    • /
    • 2022
  • This study was conducted to derive policy measures, such as fostering and support programs, by examining the actual conditions of domestic startups. The subjects were startups located in Seongnam-si, home to Pangyo Techno Valley, Korea's leading innovation cluster and widely regarded as a startup mecca. Startups were defined as companies under seven years old based on new technologies such as IT, BT, and CT, and study subjects were selected accordingly. This is a step forward from previous research in that it operationalizes the previously abstract concept of a startup in a quantitatively measurable way. The analysis found that about 94% of startups fall within the so-called "Death Valley" growth stage, while startups at or beyond scale-up, meaning full-scale growth past break-even (BEP), account for only about 6%. Respondents cited funding as the greatest difficulty in the early startup stage, and named loan evaluation methods that prioritize sales or collateral as the biggest problem in raising funds. In addition, startups rated access to private investment capital such as VCs, accelerators (ACs), and angel investors as low compared with policy funds, which are public funds. Most startups showed strong interest in overseas expansion and chose matching with overseas investors, such as overseas VCs, as the most needed support for it. Overall competitiveness in overseas markets was rated 49.6 out of 100 points, below the midpoint, indicating somewhat inferior competitiveness. The analysis concluded that, for domestic startups to increase their competitiveness in overseas as well as domestic markets, public support and investment in overseas sales channels (distribution networks, etc.) should be prioritized alongside strengthening technological competitiveness.

Decomposition Characteristics of Fungicides(Benomyl) using a Design of Experiment(DOE) in an E-beam Process and Acute Toxicity Assessment (전자빔 공정에서 실험계획법을 이용한 살균제 Benomyl의 제거특성 및 독성평가)

  • Yu, Seung-Ho;Cho, Il-Hyoung;Chang, Soon-Woong;Lee, Si-Jin;Chun, Suk-Young;Kim, Han-Lae
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.30 no.9
    • /
    • pp.955-960
    • /
    • 2008
  • We investigated the decomposition and mineralization of benomyl in an E-beam process using a design of experiments (DOE) based on a general factorial design. The main factors, benomyl concentration (X₁) and E-beam irradiation dose (X₂), were each set at 5 levels to estimate a prediction model and optimization conditions. At first, benomyl was almost completely degraded in all treatment combinations except trials 17 and 18, and the difference in benomyl decomposition among the 3 blocks was not significant (p > 0.05, one-way ANOVA). However, benomyl mineralization was 46% (block 1), 36.7% (block 2), and 22% (block 3), a significant difference between blocks (p < 0.05). The linear regression equations for benomyl mineralization in each block were estimated as follows: block 1, Y₁ = 0.024X₁ + 34.1 (R² = 0.929); block 2, Y₂ = 0.026X₂ + 23.1 (R² = 0.976); block 3, Y₃ = 0.034X₃ + 6.2 (R² = 0.98). The normality of benomyl mineralization, checked with the Anderson-Darling test, was satisfied under all treatment conditions (p > 0.05). The prediction model and optimum point obtained by canonical analysis were Y = 39.96 - 9.36X₁ + 0.03X₂ - 10.67X₁² - 0.001X₂² + 0.011X₁X₂ (R² = 96.3%, adjusted R² = 94.8%) and 57.3% mineralization at 0.55 mg/L and 950 Gy, respectively. A Microtox test using V. fischeri showed that toxicity, expressed as inhibition (%), was reduced almost completely after E-beam irradiation, whereas inhibition for 0.5 mg/L, 1 mg/L, and 1.5 mg/L was 10.25%, 20.14%, and 26.2%, respectively, in the initial reactions in the absence of E-beam illumination.
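The canonical analysis mentioned in this abstract amounts to solving ∇Y = 0 for the fitted second-order model. A minimal sketch using the coefficients printed above (the abstract does not state the factor coding/scaling, so this works in the model's own units and is illustrative only, not a reproduction of the reported optimum):

```python
import numpy as np

# Coefficients of the fitted second-order response surface from the abstract:
# Y = b0 + b1*X1 + b2*X2 + b11*X1^2 + b22*X2^2 + b12*X1*X2
b0, b1, b2 = 39.96, -9.36, 0.03
b11, b22, b12 = -10.67, -0.001, 0.011

def predict(x1, x2):
    """Predicted benomyl mineralization (%) at factor levels (x1, x2)."""
    return b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2

# Canonical analysis: the stationary point solves grad Y = 0, i.e. the
# linear system  [2*b11  b12; b12  2*b22] @ x = [-b1, -b2].
A = np.array([[2*b11, b12],
              [b12, 2*b22]])
x_stat = np.linalg.solve(A, np.array([-b1, -b2]))
print("stationary point:", x_stat, "predicted Y:", predict(*x_stat))
```

The eigenvalues of `A` (both negative here) tell whether the stationary point is a maximum, minimum, or saddle, which is the core of a canonical analysis.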

A Study on Industries's Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability- (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference
    • /
    • 2004.11a
    • /
    • pp.355-380
    • /
    • 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as test assets, I establish several key results. First, a number of industries such as semiconductors, electronics, metals, and petroleum lead the stock market by up to one month. In contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity, such as industrial production growth. Consistent with my hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity. However, this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as being only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets they trade. Indeed, a large literature in psychology documents the extent to which even attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors have information about only a limited number of stocks and trade only those.
Related models of limited market participation include Brennan (1975) and Allen and Gale (1994). As a result, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information gradually diffuses across the investment public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information originating in one asset reaches investors in other markets only with a lag, i.e., news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract information from, the asset prices of markets they do not participate in. These two assumptions taken together lead to cross-asset return predictability. The hypothesis appears very plausible for a few reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets; put another way, limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers there is specialization along industries, such as sector or market-timing funds. Some reasons for this limited participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets they do participate in.
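The lead-lag relationship this abstract describes is typically tested with a predictive regression of market returns on lagged industry returns. A minimal sketch on synthetic monthly data (the one-month lag and the coefficient of 0.3 are assumptions for illustration, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly returns: an "industry" whose news diffuses into the
# broad market with a one-month lag (illustrative only, not real data).
n = 240
industry = rng.normal(0.0, 0.05, n)
market = 0.3 * np.roll(industry, 1) + rng.normal(0.0, 0.05, n)
market[0] = rng.normal(0.0, 0.05)  # first month has no lagged signal

# Predictive regression: r_mkt(t) = a + b * r_ind(t-1) + e(t)
X = np.column_stack([np.ones(n - 1), industry[:-1]])
y = market[1:]
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"lead-lag coefficient b = {b:.3f}")  # positive b: the industry leads
```

Under the gradual-information-diffusion hypothesis, a significantly positive `b` for an industry portfolio, but not the reverse regression, is the signature of that industry leading the market.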


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; in fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Online reviews are easy to collect openly and can strongly affect a business: in marketing, real-world information from customers is gathered on websites rather than through surveys. Whether a website's posts are positive or negative is reflected in sales, so firms try to identify this information. However, many reviews on a website are unreliable and difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains lacking because sentiment calculations change with the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set.
First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential attributes of the data. RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve that problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and tried to understand how well, and why, the models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining these two algorithms are as follows. CNN can automatically extract features for classification by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates open and close to control the flow of information at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The LSTM's memory block may not store all the data, but it can solve the long-term dependency problem.
Furthermore, when LSTM follows CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN alone but faster than LSTM alone, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernel is trained step by step. CNN-LSTM can compensate for the weaknesses of each model, with the advantage of layer-wise learning through the end-to-end structure of LSTM. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
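The CNN-to-LSTM pipeline the abstract describes, convolution for local n-gram features, pooling to shorten the sequence, then an LSTM over the remaining order, can be sketched with Keras. The layer sizes and hyperparameters here are illustrative assumptions, not the values tuned in the study:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, seq_len = 10000, 200  # assumed vocabulary and review length

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 128),        # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),  # local (n-gram) feature extraction
    layers.MaxPooling1D(4),                   # shorten the sequence for the LSTM
    layers.LSTM(64),                          # temporal dependencies across features
    layers.Dense(1, activation="sigmoid"),    # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Because the whole stack is differentiable end to end, training the classifier also refines the embedding and convolution kernels, which is the layer-by-layer improvement the abstract refers to.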