• Title/Summary/Keyword: Size system


The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.1-33 / 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development, sustainable growth, and survival, but also seek networking to overcome their resource constraints; yet their small size limits both efforts. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and to solve networking problems. To address these problems, government-funded research institutes play an important role in resolving the information asymmetry faced by SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to enhance SMEs' innovation capacity, and to show how that infrastructure contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information support infrastructure. To achieve this purpose, we first classified the characteristics of SMEs found to utilize the information support infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our results. Next, we performed mediator and moderator effect analysis on multiple variables to analyze the process through which use of the information support infrastructure led to improved external networking capabilities and, in turn, enhanced product competitiveness. This analysis helps identify the key factors to focus on when offering indirect support to SMEs through the information support infrastructure, which in turn helps manage research related to the SME support policies implemented by government-funded institutions more efficiently. The results of this study are as follows. First, SMEs that used the information support infrastructure were found to differ significantly in size from domestic R&D SMEs, but no significant difference appeared in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs using the information support infrastructure are larger, and include a relatively higher proportion of companies that transact heavily with large companies, compared to the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions.
Second, among the SMEs that use the information support infrastructure, we found that increased external networking capability contributed to enhanced product competitiveness; this was not the result of direct assistance but rather an indirect contribution through increased open marketing capability: in other words, an indirect-only mediation effect. In addition, the number of times a company received additional support in this process through mentoring on information utilization was found to exert a mediated moderation effect on the path from improved external networking capability to strengthened product competitiveness. The results of this study provide several insights for policy. The findings on KISTI's information support infrastructure may lead to the conclusion that it reaches companies whose marketing is already well underway, that is, that it intentionally supports groups able to achieve good performance. Accordingly, the government should set clear priorities on whether to support underdeveloped companies or to help better-performing ones improve further. Through this research, we have identified how public information infrastructure contributes to product competitiveness, and several policy implications can be drawn. First, the public information support infrastructure should enhance firms' ability to interact with, or to find, the experts who provide the required information. Second, if utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring as a parallel offline support; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs need to improve their ability to utilize the infrastructure, because the effect of enhancing networking capacity and product competitiveness through the public information support infrastructure appears in most types of companies rather than only in specific SMEs.
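As a rough illustration of the indirect-only mediation analysis described in this abstract, the following Python sketch estimates an indirect effect with a bootstrap confidence interval. The column names (infra_use, networking, competitiveness) and the synthetic data are assumptions for illustration only, not the study's actual variables, model, or data.

```python
# Minimal mediation sketch: indirect effect = a * b, with a bootstrap CI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df):
    # Path a: infrastructure use -> external networking capability
    a = smf.ols("networking ~ infra_use", data=df).fit().params["infra_use"]
    # Path b: networking -> product competitiveness, controlling for infrastructure use
    b = smf.ols("competitiveness ~ networking + infra_use", data=df).fit().params["networking"]
    return a * b

def bootstrap_ci(df, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(len(df))
    effects = [indirect_effect(df.iloc[rng.choice(idx, size=len(df), replace=True)])
               for _ in range(n_boot)]
    return np.percentile(effects, [2.5, 97.5])

# Placeholder data, for illustration only
rng = np.random.default_rng(0)
n = 200
infra_use = rng.integers(0, 2, n)
networking = 0.5 * infra_use + rng.normal(size=n)
competitiveness = 0.6 * networking + rng.normal(size=n)
df = pd.DataFrame({"infra_use": infra_use, "networking": networking,
                   "competitiveness": competitiveness})
print("indirect effect:", indirect_effect(df))
print("95% bootstrap CI:", bootstrap_ci(df))
```

A moderated-mediation check of the mentoring variable could be added by interacting it with the networking term, but that step is omitted here.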

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control / v.5 no.2 / pp.215-235 / 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it helps select an appropriate heating system, set up an energy utilization strategy, schedule seasonal crop patterns, and plan new greenhouse ranges. In this study, control of the greenhouse microclimate is categorized into cooling and heating. A dynamic model was adopted to simulate heating requirements and energy conservation effectiveness, such as energy saving by a night-time thermal curtain, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. The cooling effects of ventilation, shading, and a pad & fan system were partly analyzed with a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface, or by recirculating cold water through heat exchangers, would be effective for summer cooling. The mathematical model developed for the greenhouse simulation is highly applicable because it can reflect various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was verified against various weather data obtained through long-term greenhouse experiments. Most of the results relating to greenhouse heating or cooling were obtained from the mathematically simulated model greenhouse using typical-year (1987) data for Jinju, Gyeongnam, but some of the cooling results were obtained from model experiments, including analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1. The heating requirements of the model greenhouse were highly related to the minimum temperature set for the greenhouse. The night-time set temperature is much more influential on heating energy requirements than the day-time setting; therefore, the night-time set temperature should be carefully determined and controlled. 2. HDH data obtained by the conventional method are estimated from long-term average temperatures together with a standard base temperature (usually 18.3°C). Such data can serve only as a relative comparison criterion for heating load and are not applicable to the calculation of greenhouse heating requirements because of the limited consideration of climatic factors and the inappropriate base temperature. Comparing the HDH data with the simulation results shows that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy-saving effect of the night-time thermal curtain, as well as the estimated heating requirement, was found to be sensitive to weather conditions: the thermal curtain adopted for simulation showed high energy-saving effectiveness, amounting to more than 50% of the annual heating requirement. 4. Ventilation performance during warm seasons is mainly influenced by the air exchange rate, with some variation depending on greenhouse structure, weather, and cropping conditions.
For air exchange rates above one volume per minute, the reduction in temperature rise for both greenhouse types becomes modest with further increases in ventilation capacity; therefore, the desirable ventilation capacity is taken to be one air change per minute, which is the recommended rate for common greenhouses. 5. In a fully cropped glass greenhouse, under clear weather at 50% RH and a continuous one air change per minute, the temperature drops in a 50% shaded greenhouse and in a pad & fan greenhouse are 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air exchange was 36.6°C, which was 5.3°C above ambient; as a result, the greenhouse temperature can be maintained about 3°C below ambient temperature. When RH is 80%, however, it is impossible to drop the greenhouse temperature below ambient, because the possible temperature reduction by the pad & fan system is then no more than 2.4°C. 6. For the three months of the hot summer season, assuming the greenhouse is cooled only when its temperature rises above 27°C, the relationship between ambient RH and the greenhouse temperature drop (ΔT) was formulated as ΔT = -0.077·RH + 7.7. 7. Time-dependent cooling effects of ventilation, 50% shading, and a pad & fan system of 80% efficiency, operated singly or in combination, were predicted continuously over a typical summer day. When the greenhouse was cooled only by one air change per minute, the greenhouse air temperature was 5°C above outdoor temperature. No single method alone can drop the greenhouse air temperature below outdoor temperature, even under fully cropped conditions, but when the systems were operated together, the greenhouse air temperature could be controlled to about 2.0-2.3°C below ambient. 8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a flow rate of 1.3 liter/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0°C, about 10°C below the ambient temperature of 26.5-28.0°C at that time. The most important requirement for cooling greenhouse air effectively with water spray may be securing a plentiful source of cool water, such as ground water or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat pump operation but also on finding the relationships between greenhouse air temperature ($T_g$), spray water temperature ($T_w$), water flow rate ($Q$), and ambient temperature ($T_o$).
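The empirical relation in item 6 can be evaluated directly. The short Python sketch below applies ΔT = -0.077·RH + 7.7 to a few humidity levels; it is illustrative only and simply assumes the formula as reported in the abstract.

```python
# Predicted greenhouse temperature drop (deg C) as a function of ambient RH (%),
# using the regression reported in the abstract: dT = -0.077 * RH + 7.7
def temperature_drop(rh_percent: float) -> float:
    return -0.077 * rh_percent + 7.7

for rh in (50, 65, 80):
    print(f"RH {rh:>3}% -> predicted drop {temperature_drop(rh):.1f} C")
# At RH ~80% the predicted drop is only ~1.5 C, consistent with the abstract's
# observation that pad & fan cooling cannot pull the house below ambient in humid weather.
```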


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It performs especially well in image recognition and, more broadly, on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used in deep learning, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of filters, and the conditions under which dropout was applied. The F1 score, rather than overall accuracy, was used to evaluate how well the models classify the class of interest. The details of how each deep learning technique was applied are as follows. The CNN algorithm reads adjacent values around a given value and recognizes local features, but the distance between business data fields is not meaningful because the fields are usually independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer to make decisions based on the resulting features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of each field's position.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models. This is interesting because CNN performed well on binary classification problems to which it has rarely been applied, as well as in the fields where its effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
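As a hedged sketch of the kind of model this abstract describes (a 1D CNN whose filter spans all input fields, followed by an extra hidden layer and dropout of 0.5, evaluated with the F1 score), the following Python/Keras code shows one possible configuration. The layer widths, field count, and synthetic data are illustrative assumptions, not the paper's exact settings or dataset.

```python
# Illustrative 1D-CNN binary classifier with a filter spanning all fields and dropout 0.5.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score

n_fields = 16  # number of input variables (assumption)

model = keras.Sequential([
    # Filter width equals the number of fields, so each filter sees the whole record at once
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu",
                  input_shape=(n_fields, 1)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),  # extra hidden layer on the derived features
    layers.Dropout(0.5),                  # drop units with probability 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder data; a real run would use the bank telemarketing dataset
X = np.random.rand(1000, n_fields).astype("float32")
y = np.random.randint(0, 2, size=1000)
model.fit(X[..., None], y, epochs=5, batch_size=64, verbose=0)

pred = (model.predict(X[..., None], verbose=0).ravel() > 0.5).astype(int)
print("F1:", f1_score(y, pred))  # F1 rather than accuracy, as in the study
```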

A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.53-77 / 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already become a mega trend in Korea, with the market size estimated at approximately 15 trillion won (KRW) for 2015. In the Korean market, social commerce and the open market are the key components, and social commerce overwhelms the open market in terms of the number of users in Korean mobile commerce. From the industry's point of view, quick market entry and content curation are considered the major success factors behind the rapid growth of social commerce. However, empirical academic research analyzing the success of social commerce is still insufficient. Social commerce and the open market in Korean mobile commerce are expected to compete intensively, so it is important to conduct an empirical analysis of the differences in user experience between them. This paper is an exploratory study that compares social commerce and the open market in terms of user experience, based on mobile users' reviews. First, approximately 10,000 user reviews of social commerce and open market applications listed on Google Play were collected. The reviews were classified into topics corresponding to perceived usefulness and perceived ease of use through LDA topic modeling, and then sentiment analysis and co-occurrence analysis were conducted on those topics. The results demonstrate that social commerce users have a more positive experience in terms of service usefulness and convenience than open market users. Social commerce has provided positive experiences to mobile users in service areas such as 'delivery,' 'coupon,' and 'discount,' while the open market has faced user complaints about technical problems and inconveniences such as 'login error,' 'view details,' and 'stoppage.' This result shows that social commerce performs well in terms of service experience, owing to aggressive marketing campaigns and investments in logistics infrastructure. The open market, however, still has mobile optimization problems, as it has not yet resolved user complaints and inconveniences stemming from technical problems. This study presents an exploratory method for analyzing user experience empirically from user reviews. In contrast to previous studies, which relied on surveys, this study uses empirical analysis of user reviews, which reflect users' vivid, actual experiences. Specifically, using an LDA topic model and TAM, the study presents a methodology that analyzes user reviews effectively by dividing them into service areas and technical areas from a new perspective. The methodology has not only demonstrated the differences in user experience between social commerce and the open market, but also provided a deep understanding of user experience in Korean mobile commerce.
In addition, the results have important implications for social commerce and the open market by showing that user insights can be utilized in establishing competitive, groundbreaking strategies. The limitations and directions for follow-up studies are as follows. Follow-up work will require a more elaborate text analysis technique; this study could not fully clean the user reviews, which inherently contain typos and mistakes. This study has shown that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to expand comparative research on services using user reviews. Even at this moment, users around the world are posting reviews about their experiences with mobile game, commerce, and messenger applications.
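A minimal sketch of the LDA topic-modeling step described above, using gensim; the review texts, tokenization, and topic count are placeholders rather than the study's actual corpus or configuration, and the sentiment and co-occurrence steps are omitted.

```python
# Toy LDA pass over app-store-style reviews; topics could then feed a sentiment step.
from gensim import corpora, models

reviews = [
    "delivery was fast and the coupon discount worked",
    "login error again, cannot view details, app stoppage",
]  # placeholder review texts
tokenized = [r.lower().split() for r in reviews]  # real preprocessing would be more careful

dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)  # inspect whether topics align with service vs. technical areas
```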

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms because financial data are easier to collect for these firms. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which mainly comprise small and medium-sized firms. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). SOM is a type of artificial neural network that is trained using unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. SOM differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the popular and successful clustering algorithms. In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fall into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fall into this pattern. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that these firms may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio; it is concluded that these firms became distressed while pursuing business expansion. About 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms; perhaps these firms were distressed due to a bad short-term strategic decision or problems with the entrepreneurs running them. Approximately 18% of the firms fall into this pattern. This study makes both academic and empirical contributions. From the academic perspective, non-audited companies, which tend to go bankrupt easily and have unstructured or easily manipulated financial data, are classified by a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared, reliable financial data. From the empirical perspective, even though only the financial data of non-audited firms are analyzed, the analysis is useful for detecting the first symptoms of financial distress, which helps forecast bankruptcy and manage early warning signals. The limitation of this research is that only 100 firms were analyzed, owing to the difficulty of collecting financial data for non-audited firms, which made it hard to carry out analysis by industry category or firm size.
Non-financial qualitative data are also crucial for the analysis of bankruptcy, so non-financial qualitative factors will be taken into account in a future study. This study sheds some light on future distress prediction for non-audited small and medium-sized firms.
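A minimal sketch of clustering financial ratios with a Self-Organizing Map, in the spirit of the study; it uses the third-party MiniSom package, and the placeholder ratio matrix and the 5x1 map size are illustrative assumptions (the paper reports five resulting patterns, not the grid it used).

```python
# Cluster standardized financial ratios with a small SOM; each best-matching unit
# acts as a distress "pattern" for the firms mapped onto it.
import numpy as np
from sklearn.preprocessing import StandardScaler
from minisom import MiniSom

ratios = np.random.rand(100, 10)            # placeholder: 100 firms x 10 financial ratios
X = StandardScaler().fit_transform(ratios)  # put ratios on a comparable scale

som = MiniSom(x=5, y=1, input_len=X.shape[1], sigma=0.5, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=1000)

patterns = [som.winner(row) for row in X]   # pattern assignment per firm
print(patterns[:10])
```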

An Analytical Study on Stem Growth of Chamaecyparis obtusa (편백(扁栢)의 수간성장(樹幹成長)에 관(關)한 해석적(解析的) 연구(硏究))

  • An, Jong Man;Lee, Kwang Nam
    • Journal of Korean Society of Forest Science / v.77 no.4 / pp.429-444 / 1988
  • Considering the recent trend toward multiple-use development of forest trees, investigations providing comprehensive information on young stands of Hinoki cypress are necessary for rational forest management. From this point of view, 83 sample trees were selected and felled from 23-year-old stands of Hinoki cypress at Changsung-gun, Chonnam-do. Various stem growth factors of the felled trees were measured, and canonical correlation analysis, principal component analysis, and factor analysis were applied to investigate stem growth characteristics and the relationships among growth factors, and to obtain latent and comprehensive information. The results are as follows. The canonical correlation coefficient between stem volume and the quality growth factors was 0.9877. The coefficients of the canonical variates showed that DBH among the diameter growth factors and height among the height growth factors had important effects on stem volume. From the analysis of the relationship between stem volume and the canonical variates, which linearly combine DBH and height as one set, DBH had a greater influence on volume growth than height. The first and second principal components were adopted to meet the effective-value criterion of 85% in the principal component analysis of 12 stem growth factors; together they had a cumulative contribution rate of 88.10%. The 1st and 2nd principal components were interpreted as a "size factor" and a "shape factor", respectively. From the summed proportions of the effective principal components for each variate, the variates other than crown diameter, clear length, and form height were explained to more than 87%. Two common factors were set by the eigenvalues obtained from the SMC (squared multiple correlation) of the diagonal elements of the canonical matrix. Of the two latent factors, $f_1$ and $f_2$, the former was interpreted as reflecting the diameter growth system. Among the 12 growth factors, the communalities of all variates except clear length and crown diameter had high explanatory power, ranging from 78.62 to 98.30%. The 83 sample trees could be classified into 5 stem types as follows: a medium type within a radius of ±1 standard deviation of the factor scores, a uniform type (in both diameter and height growth) in the 1st quadrant, a slim type in the 2nd quadrant, a dwarfish type in the 3rd quadrant, and a full-boled type in the 4th quadrant.
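A brief sketch of the principal component step described in the abstract (retaining the first two components and inspecting their cumulative contribution and loadings, which the authors interpret as "size" and "shape" factors); the placeholder data and the scikit-learn implementation are assumptions for illustration only.

```python
# PCA over standardized stem growth factors; inspect cumulative contribution and loadings.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

growth = np.random.rand(83, 12)            # placeholder: 83 sample trees x 12 growth factors
X = StandardScaler().fit_transform(growth)

pca = PCA(n_components=2).fit(X)
print("cumulative contribution:", pca.explained_variance_ratio_.cumsum())
print("PC1/PC2 loadings:\n", pca.components_)  # interpret signs/magnitudes per growth factor
```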


IR Study on the Adsorption of Carbon Monoxide on Silica Supported Ruthenium-Nickel Alloy (실리카 지지 루테늄-니켈 합금에 있어서 일산화탄소의 흡착에 관한 IR 연구)

  • Park, Sang-Youn;Yoon, Dong-Wook
    • Applied Chemistry for Engineering / v.17 no.4 / pp.349-356 / 2006
  • We investigated the adsorption and desorption behavior of CO on silica-supported Ru/Ni alloys at various Ru/Ni mole ratios and CO partial pressures using a Fourier transform infrared spectrometer (FT-IR). For the Ru-$SiO_{2}$ sample, four bands were observed at $2080.0cm^{-1}$, $2021.0{\sim}2030.7cm^{-1}$, $1778.9{\sim}1799.3cm^{-1}$, and $1623.8cm^{-1}$ on adsorption, and three bands were observed at $2138.7cm^{-1}$, $2069.3cm^{-1}$, and $1988.3{\sim}2030.7cm^{-1}$ on vacuum desorption. For the Ni-$SiO_{2}$ sample, four bands were observed at $2057.7cm^{-1}$, $2019.1{\sim}2040.3cm^{-1}$, $1862.9{\sim}1868.7cm^{-1}$, and $1625.7cm^{-1}$ on adsorption, and two bands were observed at $2009.5{\sim}2040.3cm^{-1}$ and $1828.4{\sim}1868.7cm^{-1}$ on vacuum desorption. These absorption bands correspond approximately to those of previous reports. For the Ru/Ni(9/1, 8/2, 7/3, 6/4, 5/5; mole ratio)-$SiO_{2}$ samples, three bands were observed at $2001.8{\sim}2057.7cm^{-1}$, $1812.8{\sim}1926.5cm^{-1}$, and $1623.8{\sim}1625.7cm^{-1}$ on adsorption, and three bands were observed at $2140.6cm^{-1}$, $2073.1cm^{-1}$, and $1969.0{\sim}2057.7cm^{-1}$ on vacuum desorption. The spectral pattern observed for the Ru/Ni-$SiO_{2}$ sample with a 9/1 Ru/Ni mole ratio on CO adsorption and on vacuum desorption is almost identical to that of the Ru-$SiO_{2}$ sample, whereas the patterns observed for Ru/Ni-$SiO_{2}$ samples at mole ratios of 8/2 and below are almost identical to that of the Ni-$SiO_{2}$ sample. Considering these observations, it may be suggested that the surfaces of the alloy clusters on the Ru/Ni-$SiO_{2}$ samples contain more Ni than the nominal mole ratio of the sample. For the Ru/Ni-$SiO_{2}$ samples, the absorption band shifts may be ascribed to variations in surface concentration, strain due to the atomic size difference, variations in bonding energy and electron density, and changes in surface geometry according to the surface concentration. Studies of CO adsorption on Ru/Ni alloy cluster surfaces by LEED and Auger spectroscopy, of the interaction between the Ru/Ni alloy clusters and $SiO_{2}$, and MO calculations for the system would be needed to look further into these phenomena.

Effects of Activation Regimens of Recipient Cytoplasm, Culture Condition of Donor Embryos and Size of Blastomeres on Development of Reconstituted Bovine Embryos (수핵 난자의 활성화 방법과 공핵 수정란의 배양체계 및 할구의 크기가 소 핵이식 수정란의 발달에 미치는 영향)

  • 심보웅;조성근;이효종;박충생;최상용
    • Korean Journal of Animal Reproduction / v.22 no.4 / pp.425-435 / 1998
  • To improve the efficiency of nuclear transplantation in cattle, this study compared the in vitro development of nuclear transfer (NT) embryos under different activation regimens of the enucleated oocytes. The effects of the developmental stage and culture system of the donor nuclei on fusion and in vitro development of NT embryos were also evaluated. Oocytes were collected from Hanwoo ovaries obtained from a slaughterhouse and matured in Ham's F-10 supplemented with hormones. After 20-22 h of maturation, the oocytes were vortexed to remove cumulus cells, and their nucleus and first polar body were subsequently removed. Enucleated oocytes were divided into 3 groups for activation: group I oocytes were activated with ionomycin for 5 min and subsequently incubated in 6-dimethylaminopurine (DMAP) for 4 h; group II oocytes were treated with DMAP for 4 h at 39 h after the onset of in vitro maturation (IVM); and group III oocytes were kept at room temperature (25°C) for 3 h at 39 h after the onset of IVM. After in vitro fertilization (IVF), the nuclear-donor embryos were cultured either in groups (20 embryos per 50 µl drop) or individually (1 embryo per 50 µl drop) for 4 or 5 days. On days 4 and 5 after IVF, blastomeres were separated in calcium- and magnesium-free medium and classified as small (day 5: ≤38 µm, day 4: ≤46 µm) or large (day 5: ≥38 µm, day 4: ≥46 µm). The separated blastomeres were transferred into enucleated, activated recipient cytoplasm, and the blastomere-oocyte complexes were fused electrically. The NT embryos were cultured in TCM-199 containing 10% FCS in a 39°C, 5% CO2 incubator for 7 days. The results were as follows. There were no differences among groups in fusion rate and development to blastocyst: group I (68%, 10%), group II (75%, 14%), and group III (73%, 9%), respectively. However, the cell number of NT blastocysts in group III was significantly lower than in the other groups (P<0.05). No differences in fusion or development to blastocyst were found between individually and group-cultured donor embryos, or between small and large blastomeres of day-4 and day-5 donor embryos. From these results, it was concluded that the combination of ionomycin and DMAP, or DMAP treatment at 39 h after the onset of IVM, is useful for the efficient production of NT bovine embryos, and that individually cultured embryos can conveniently be used as nuclear donors for NT bovine embryos.


Histological and Biochemical Studies on the Rooting of Hard-wood Cuttings in Mulberry (Morus species) (뽕나무 古條揷木의 發根에 關한 組織 및 生化學的 硏究)

  • Lim, Su-Ho
    • Journal of Sericultural and Entomological Science / v.23 no.1 / pp.1-31 / 1981
  • The rootability of hardwood cuttings of mulberry is related not only to histological characteristics but also to biochemical properties. In this connection, the characteristics of the hardwood cuttings were observed histologically, and the growth substances produced by the cuttings were identified by means of a mung bean bioassay. Amino acid, carbohydrate, and nucleic acid contents, and the C/N ratio, were also analysed. The results are summarized as follows. 1. There were differences in the rootability of cuttings among mulberry species and varieties. Among the three species tested, Morus Lhou Koidz. showed the highest rootability while M. bombycis showed the lowest. Regarding varietal differences, the varieties could be grouped according to rootability: high (above 80%), medium (41~79%), and low (below 40%). The high-rootability varieties were Kemmochi, Nakamaki, Kosen, and Wusuba roso. 2. The histological characteristic most closely related to rootability was the arrangement of cell layers in the sclerenchyma tissue: the lower-rootability varieties developed two or three overlapping cell layers in the bark tissue, whereas in the higher-rootability varieties the cells were scattered over the primary cortex. 3. In the higher-rootability varieties, there was a positive correlation between the development of root primordia and the rootability of the hardwood cuttings; the size of the primordia and the surface area of the lenticels were also closely related to rootability. 4. The effects of growth substances extracted from the hardwood cuttings were determined by mung bean bioassay. The higher-rootability varieties usually showed higher activity of growth-promoting substances, whereas the lower-rootability varieties showed higher activity of inhibitory substances. 5. The substance separated by paper chromatography at an $R_f$ value of 0.3 to 0.5 was identified as indole acetic acid. The other substances detected at $R_f$ values from 0.8 to 1.0 and from the origin to 0.1 were also responsible for rooting. 6. Growth substances are distributed in quantitatively different amounts in a synergistic system within the tissues of the cuttings, and the balance between growth-promoting and inhibitory substances gives rise to rooting. In particular, when no substances descended from the winter buds, the cuttings did not root, but these substances were produced within a week after planting in a warm environment. 7. Carbohydrate content ($r=0.72^*$) and total sugar ($r=0.67^*$) were positively correlated with rootability, whereas reducing sugars ($r=-0.75^*$) were negatively correlated with it. 8. A high C/N ratio gave rise to high rootability ($r=0.67^*$); rootability therefore depended on a high amount of carbohydrate rather than nitrogen in the cuttings. 9. The RNA and DNA content of the cuttings did not change for up to two weeks after planting; thereafter, an increase in RNA content took place only in the high-rootability varieties. 10. There were quantitative and qualitative differences in amino acid composition between the high- and low-rootability varieties; more aspartic acid and cystine were found in the higher-rootability varieties.


A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.69-92
    • /
    • 2015
  • The explosion of social media data has led to the application of text-mining techniques to analyze big social media data in a more rigorous manner. Even though social media text analysis algorithms have improved, previous approaches still have limitations. In sentiment analysis of Korean social media, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies add grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis, but this approach has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, a neural-network-based method, to capture the more extensive semantic features that were underestimated in existing sentiment analysis. The results from the Word2Vec algorithm are compared with those from co-occurrence analysis to identify the differences between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many related words expressing emotion about the keyword as co-occurrence analysis does. The difference arises from Word2Vec's vectorization of semantic features; therefore, the Word2Vec algorithm is able to catch hidden related words that traditional analysis does not find. In addition, Part-Of-Speech (POS) tagging for Korean is used to detect adjectives as "emotional words." The emotional words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, and among these, nouns are selected because each of them could have a causal relationship with the emotional word in the sentence. The process of extracting these trigger factors of emotional words is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords (professor, prosecutor, and doctor), because these keywords carry rich public emotion and opinion. Preliminary data collection was conducted to select secondary keywords for data gathering. The secondary keywords for each primary keyword were: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor); Doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate); Prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all programs for text processing and analysis were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, Emotion Trigger can detect hidden connections to public emotion that existing methods cannot. Finally, the approach could be generalized to other types of text data.
The limitation of this study is that it is hard to say whether the words extracted by Emotion Trigger have a significant causal relationship with the emotional word in a sentence. A future study will clarify this causal relationship by comparing the extracted relationships with manually tagged ones. Furthermore, the text data used in Emotion Trigger are from Twitter, so they have a number of distinct features that we did not deal with in this study; these will be considered in further work.
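As a rough sketch of the Word2Vec step behind Emotion Trigger, the following code trains word vectors on tokenized documents and lists the words nearest to an emotional adjective; candidate triggers would be the nouns among them. The tiny English corpus and the chosen adjective are placeholders (the study works on Korean text with POS tagging and Java tooling).

```python
# Train word vectors and query words near an "emotional word"; nouns among the
# neighbors are candidate Emotion Triggers.
from gensim.models import Word2Vec

sentences = [
    ["professor", "research", "money", "angry"],
    ["doctor", "hospital", "rebate", "disappointed"],
]  # placeholder tokenized documents

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, seed=0)

emotion_word = "angry"  # an adjective flagged by POS tagging as an emotional word
for word, score in model.wv.most_similar(emotion_word, topn=5):
    print(word, round(score, 3))
```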