• Title/Summary/Keyword: Volume Data


Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.127-146 / 2022
  • Recently, as word embedding has shown excellent performance in various tasks of deep learning-based natural language processing, research on the advancement and application of word, sentence, and document embedding is being actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing together with the development of embedding models. Academic interest in vector alignment is also growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to be applied to mapping between specialized domains and generalized domains. In other words, it is expected that it will become possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model trained on a huge volume of general-purpose documents, or to provide a clue for mapping vocabulary between different specialized fields. However, since the linear vector alignment that has mainly been studied in academia assumes statistical linearity, it tends to oversimplify the vector space. It essentially assumes that different vector spaces are geometrically similar, which inevitably distorts the alignment. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology consists of sequentially training a skip-connected autoencoder and a regression model to align the specialized word embeddings expressed in each space to the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of 'health care' among national R&D tasks performed from 2011 to 2020. As a result, it was confirmed that the proposed methodology showed superior performance in terms of cosine similarity compared to existing linear vector alignment.
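The abstract describes a two-stage pipeline: a skip-connected autoencoder trained on the domain-specific embeddings, followed by a regression model that maps them into the general embedding space. The sketch below is a minimal, hypothetical rendering of that idea in PyTorch; the dimensionality, layer sizes, form of the skip connection, and training details are assumptions, not the authors' implementation.

```python
# Hypothetical two-stage alignment sketch (assumed details, not the paper's code):
# (1) a skip-connected autoencoder learns a nonlinear representation of the
#     domain-specific embedding space, then (2) a regression model maps its
#     output into the general (pre-trained) embedding space.
import torch
import torch.nn as nn

DIM = 300  # assumed embedding dimensionality


class SkipAutoencoder(nn.Module):
    def __init__(self, dim=DIM, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        z = self.encoder(x)
        # skip connection: add the input back onto the reconstruction
        return self.decoder(z) + x


class Aligner(nn.Module):
    """Regression model mapping autoencoder outputs into the general space."""
    def __init__(self, dim=DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)


def train_alignment(domain_vecs, general_vecs, epochs=50, lr=1e-3):
    """domain_vecs, general_vecs: (N, DIM) tensors of paired anchor words."""
    mse = nn.MSELoss()
    # Stage 1: train the skip-connected autoencoder on the domain embeddings.
    ae = SkipAutoencoder()
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(ae(domain_vecs), domain_vecs)
        loss.backward()
        opt.step()
    # Stage 2: train the regression model to map AE outputs to the general space.
    reg = Aligner()
    opt = torch.optim.Adam(reg.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(reg(ae(domain_vecs).detach()), general_vecs)
        loss.backward()
        opt.step()
    return ae, reg
```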

Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.157-177 / 2022
  • Investors trade stocks while keeping a close watch on the order information submitted by domestic and foreign investors in real time through the Limit Order Book, the so-called price current provided by securities firms. Will the order information released in the Limit Order Book be useful for stock price prediction? This study analyzes whether order imbalances, which appear when investors' buy and sell orders are concentrated on one side during intra-day trading, are significant predictors of future stock price movements. Using classification algorithms, this study improved the prediction accuracy of order imbalance information for the short-term price trend, that is, the up or down movement of the day's closing price. Day trading strategies are proposed using the price trends predicted by the classification algorithms, and the trading performances are analyzed through empirical analysis. The 5-minute KOSPI200 Index Futures data were analyzed for 4,564 days from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, the order imbalance information observed in the early morning has significant forecasting power for the price trends from the early morning to the market closing time. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using the order imbalance information, at 54.1%. Fourth, the order imbalance information measured early in the day had higher prediction accuracy than the order imbalance information measured later in the day. Fifth, the trading performances of the day trading strategies using the classification algorithms' predictions of the price trends were higher than that of the benchmark trading strategy. Sixth, except for the K-Nearest Neighbor algorithm, all investment strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, the trading performances using the predictions of the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost algorithms were higher than the benchmark strategy in the Sharpe Ratio, which evaluates both profitability and risk. This study differs from existing studies in that it documents the economic value of the total buy and sell order volume information in the Limit Order Book. The empirical results of this study are also valuable to market participants from a trading perspective. In future studies, it is necessary to improve the performance of the trading strategy with more accurate price predictions by extending to deep learning models, which have recently been actively studied for stock price prediction.
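As an illustration of the classification setup described above, the sketch below fits a Support Vector Machine to an early-session order-imbalance feature and scores the up/down direction of the daily close. The column names, the normalized imbalance definition, and the chronological split are assumptions for illustration, not the paper's exact pipeline.

```python
# Illustrative sketch (assumed setup): predict whether the daily close finishes
# up or down from an early-session order-imbalance feature with an SVM, the
# best-performing classifier reported in the abstract.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score


def order_imbalance(buy_volume, sell_volume):
    # A common normalized definition: (buy - sell) / (buy + sell).
    return (buy_volume - sell_volume) / (buy_volume + sell_volume)


def fit_direction_classifier(df: pd.DataFrame):
    """df columns (assumed): 'buy_vol_am', 'sell_vol_am', 'close_up' (0/1)."""
    X = order_imbalance(df["buy_vol_am"], df["sell_vol_am"]).to_frame("oib")
    y = df["close_up"]
    # Chronological split (no shuffling) to respect the time-series structure.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print("hold-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
    return model
```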

NOx Reduction Characteristics of Ship Power Generator Engine SCR Catalysts according to Cell Density Difference (선박 발전기관용 SCR 촉매의 셀 밀도차에 따른 NOx 저감 특성)

  • Kyung-Sun Lim;Myeong-Hwan Im
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.7 / pp.1209-1215 / 2022
  • Selective catalytic reduction (SCR) is known as a very efficient method of reducing nitrogen oxides (NOx); the catalyst reduces nitrogen oxides (NOx) to nitrogen (N2) and water vapor (H2O). The catalyst, which is one of the factors determining the performance of the NOx reduction system, is known to become more efficient as its cell density increases. In this study, the NOx reduction characteristics under various engine loads were investigated. A 100 CPSI (60-cell) catalyst was studied with a laboratory-scale simulating device that can reproduce the exhaust gas conditions of the power generation engine installed in the training ship SEGERO. The effect of the 100 CPSI (60-cell) cell density was compared with that of a 25.8 CPSI (30-cell) cell density for which NOx reduction data were already available from the SCR manufacturer. The experimental catalysts were honeycomb type, and their V2O5-WO3-TiO2 composition and materials were retained, with only the cell density changed. As a result, the NOx concentration reduction rate of the 100 CPSI (60-cell) catalyst was 88.5%, and its IMO specific NOx emission was 0.99 g/kWh, satisfying the IMO Tier III NOx emission requirement. The NOx concentration reduction rate of the 25.8 CPSI (30-cell) catalyst was 78%, and its IMO specific NOx emission was 2.00 g/kWh. Comparing the two catalysts, the NOx concentration reduction rate of the 100 CPSI (60-cell) catalyst was 10.5 percentage points higher, and its IMO specific NOx emission was about half that of the 25.8 CPSI (30-cell) catalyst. Therefore, an efficient NOx reduction effect can be expected by increasing the cell density of catalysts. In other words, reductions in production cost and a more efficient arrangement of the engine room and cargo space can be expected from the reduced catalyst volume.
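For readers unfamiliar with the reported figures, the sketch below shows the simplified arithmetic behind a concentration reduction rate and a specific emission in g/kWh (NOx mass flow divided by engine power). It is a single-operating-point simplification that ignores the weighted multi-load test-cycle averaging the IMO NOx Technical Code actually prescribes, and the mass-flow and power numbers are illustrative assumptions, not measurements from the paper.

```python
# Simplified NOx bookkeeping; all numeric inputs below are placeholders.
def specific_nox_emission(nox_mass_flow_g_per_h: float, engine_power_kw: float) -> float:
    """Single-point specific emission in g/kWh: NOx mass flow over brake power."""
    return nox_mass_flow_g_per_h / engine_power_kw


def reduction_rate(inlet_ppm: float, outlet_ppm: float) -> float:
    """NOx concentration reduction rate across the SCR catalyst, in percent."""
    return (inlet_ppm - outlet_ppm) / inlet_ppm * 100.0


# Assumed example: a 1,000 kW generator engine emitting 990 g/h of NOx
# downstream of the catalyst corresponds to 0.99 g/kWh, the Tier III-compliant
# value reported for the 100 CPSI catalyst.
print(specific_nox_emission(990.0, 1000.0))   # -> 0.99
print(reduction_rate(1000.0, 115.0))          # -> 88.5
```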

Surgical Treatment of Osteoporotic Vertebral Compression Fractures at Thoraco-Lumbar Levels: Only Pedicle Screw Constructs with Polymethylmethacrylate Augmentation (흉요추부 골다공증성 척추 압박 골절의 수술적 치료: 골시멘트 보강술을 이용한 척추경 나사 고정)

  • Jun, Deuk Soo;Baik, Jong-Min;Park, Ji Hyeon
    • Journal of the Korean Orthopaedic Association / v.54 no.4 / pp.327-335 / 2019
  • Purpose: To investigate the radiological efficacy of polymethylmethacrylate (PMMA) augmentation of pedicle screw fixation in patients with osteoporotic vertebral compression fractures (OVCF). Materials and Methods: Twenty OVCF patients who underwent posterior fusion using only pedicle screws with PMMA augmentation were included in the study. The mean follow-up period was 15.6 months. The demographic data, bone mineral density (BMD), fusion segments, number of pedicle screws, and amount of PMMA were reviewed from the medical records. To analyze the radiological outcomes, the radiologic parameters were measured at serial follow-up time points (preoperatively, immediately postoperatively, and at 6 weeks, 3 months, 6 months, and 1 year postoperatively). Results: A total of 20 patients were examined (16 females [80.0%]; mean age, 69.1±8.9 years). The average BMD was -2.5±0.9 g/cm2. The average cement volume per vertebral body was 6.3 ml. The mean preoperative Cobb angle of focal kyphosis was 32.7°±7.0° and improved significantly to 8.7°±6.9° postoperatively (p<0.001), with the correction maintained at serial postoperative follow-up. The Cobb angle of instrumented kyphosis, the wedge angle, and the sagittal index showed similar patterns. In addition, the anterior height of the fractured vertebral body averaged 11.0±5.0 mm and improved to 18.5±5.7 mm postoperatively (p=0.006), with the improvement maintained at the 3-month, 6-month, and 1-year follow-up. Conclusion: Reinforcement of pedicle screws using PMMA augmentation may be a feasible surgical technique for OVCF. Moreover, it appears appropriate for improving focal thoracolumbar/lumbar kyphosis, and the correction is well maintained after surgery.
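A minimal, hypothetical sketch of the kind of paired pre- versus post-operative comparison reported for the focal kyphosis Cobb angle is shown below; the arrays are placeholders rather than patient data, and the abstract does not state which statistical test the authors used.

```python
# Placeholder paired comparison of pre- vs post-operative Cobb angles (degrees).
import numpy as np
from scipy import stats

pre_cobb = np.array([32.0, 35.5, 28.4, 38.1])   # placeholder preoperative angles
post_cobb = np.array([9.1, 10.2, 7.5, 11.0])    # placeholder postoperative angles

t_stat, p_value = stats.ttest_rel(pre_cobb, post_cobb)  # paired t-test
print(f"mean correction: {np.mean(pre_cobb - post_cobb):.1f} deg, p = {p_value:.3f}")
```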

The Correction Effect of Motion Artifacts in PET/CT Image using System (PET/CT 검사 시 움직임 보정 기법의 유용성 평가)

  • Yeong-Hak Jo;Se-Jong Yoo;Seok-Hwan Bae;Jong-Ryul Seon;Seong-Ho Kim;Won-Jeong Lee
    • Journal of the Korean Society of Radiology / v.18 no.1 / pp.45-52 / 2024
  • In this study, an AI-based algorithm was developed to prevent image quality deterioration and reading errors due to patient movement in PET/CT examinations, which use radioisotopes in medical institutions to examine cancer and other diseases. Using the Motion Free software, we checked the degree of correction of respiratory motion, evaluated its usefulness, and conducted a study for clinical application. In the experiment, the radioisotope 18F-FDG was injected into a vacuum vial and into spheres of different sizes in a NEMA IEC body phantom mounted on an RPM phantom, and images were acquired while the phantom reproduced the motion of a lesion moving with respiration. The vacuum vial had different degrees of movement at different positions, and the spheres of different sizes in the NEMA IEC body phantom produced lesions of different sizes. From the acquired images, the lesion volume, maximum SUV, and average SUV were each measured to quantitatively evaluate the degree of motion correction by Motion Free. The average SUV error rate of vacuum vial A, which had a large degree of movement, was reduced by 23.36%, and that of vacuum vial B, which had a small degree of movement, was reduced by 29.3%. The average SUV error rates at the 37 mm and 22 mm spheres of the NEMA IEC body phantom were reduced by 29.3% and 26.51%, respectively. The average of the four measured error rates decreased by 30.03%, indicating a more accurate average SUV value. In this study, only two-dimensional movements could be produced; if a phantom that can reproduce the actual breathing movement of the human body were used and a wider range of movements were configured, a more accurate evaluation of usefulness could be made.
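The usefulness evaluation above rests on SUV error rates relative to a motionless reference acquisition. The sketch below shows that arithmetic with placeholder numbers; the values and the exact error definition are assumptions, not study data.

```python
# Placeholder SUV error-rate arithmetic for quantifying motion correction.
def suv_error_rate(measured_suv: float, reference_suv: float) -> float:
    """Percentage deviation of a measured SUV from the motionless reference."""
    return abs(measured_suv - reference_suv) / reference_suv * 100.0


uncorrected = suv_error_rate(measured_suv=3.1, reference_suv=4.5)
corrected = suv_error_rate(measured_suv=4.2, reference_suv=4.5)
print(f"error without correction: {uncorrected:.1f}%")
print(f"error with Motion Free applied: {corrected:.1f}%")
print(f"reduction in error rate: {uncorrected - corrected:.1f} percentage points")
```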

The Relations between Financial Constraints and Dividend Smoothing of Innovative Small and Medium Sized Enterprises (혁신형 중소기업의 재무적 제약과 배당스무딩간의 관계)

  • Shin, Min-Shik;Kim, Soo-Eun
    • Korean small business review / v.31 no.4 / pp.67-93 / 2009
  • The purpose of this paper is to explore the relations between financial constraints and dividend smoothing of innovative small and medium sized enterprises (SMEs) listed on the Korea Securities Market and Kosdaq Market of the Korea Exchange. Innovative SMEs are defined as firms with a high level of R&D intensity, measured by the (R&D investment/total sales) ratio, following Chauvin and Hirschey (1993). R&D investment plays an important role as the innovative driver that can increase the future growth opportunity and profitability of firms. Therefore, R&D investment has large, positive, and consistent influences on the market value of the firm. From this point of view, we expect that innovative SMEs can adjust dividend payments faster than noninnovative SMEs, on the ground of their future growth opportunity and profitability. We also expect that financially unconstrained firms can adjust dividend payments faster than financially constrained firms, on the ground of their ability to finance investment funds through market accessibility. Aivazian et al. (2006) assert that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms. We collect the sample firms among the total SMEs listed on the Korea Securities Market and Kosdaq Market of the Korea Exchange during the period from January 1999 to December 2007 from the KIS Value Library database. The total number of firm-year observations of the total sample firms throughout the entire period is 5,544, the number of firm-year observations of the dividend firms is 2,919, and the number of firm-year observations of the non-dividend firms is 2,625. About 53% (or 2,919) of these 5,544 observations involve firms that make a dividend payment. The dividend firms are divided into two groups according to R&D intensity: innovative SMEs with R&D intensity larger than the median and noninnovative SMEs with R&D intensity smaller than the median. The number of firm-year observations of the innovative SMEs is 1,506, and the number of firm-year observations of the noninnovative SMEs is 1,413. Furthermore, the innovative SMEs are divided into two groups according to the level of financial constraints: financially unconstrained firms and financially constrained firms. The number of firm-year observations of the former is 894, and the number of firm-year observations of the latter is 612. Although all available firm-year observations of the dividend firms are collected, deletions are made in the case of financial industries such as banks, securities companies, insurance companies, and other financial services companies, because their capital structure and business style are widely different from those of general manufacturing firms. Stock repurchases were included in dividend payments because Grullon and Michaely (2002) examined the substitution hypothesis between dividends and stock repurchases. However, our data structure is an unbalanced panel, since there is no requirement that the firm-year observations be available for each firm during the entire period from January 1999 to December 2007 in the KIS Value Library database. We first estimate the classic Lintner (1956) dividend adjustment model, where the decision to smooth dividends or to adopt a residual dividend policy depends on financial constraints measured by market accessibility.
The Lintner model indicates that firms maintain a stable, long-run target payout ratio and that firms partially adjust the gap between the current payout ratio and the target payout ratio each year. In the Lintner model, the dependent variable is the current dividend per share ($DPS_t$), and the independent variables are the past dividend per share ($DPS_{t-1}$) and the current earnings per share ($EPS_t$). We hypothesize that firms partially adjust the gap between the current dividend per share ($DPS_t$) and the target payout ratio ($\Omega$) each year, when the past dividend per share ($DPS_{t-1}$) deviates from the target payout ratio ($\Omega$); a compact statement of this partial-adjustment equation is given after this abstract. We secondly estimate an expansion model that extends the Lintner model by including the determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transactions cost theory. In the expansion model, the dependent variable is the current dividend per share ($DPS_t$), the explanatory variables are the past dividend per share ($DPS_{t-1}$) and the current earnings per share ($EPS_t$), and the control variables are the current capital expenditure ratio ($CEA_t$), the current leverage ratio ($LEV_t$), the current operating return on assets ($ROA_t$), the current business risk ($RISK_t$), the current trading volume turnover ratio ($TURN_t$), and the current dividend premium ($DPREM_t$). Among these control variables, $CEA_t$, $LEV_t$, and $ROA_t$ are the determinants suggested by the residual dividend theory and the agency theory, $ROA_t$ and $RISK_t$ are the determinants suggested by the dividend signaling theory, $TURN_t$ is the determinant suggested by the transactions cost theory, and $DPREM_t$ is the determinant suggested by the catering theory. Furthermore, we thirdly estimate the Lintner model and the expansion model using the panel data of the financially unconstrained firms and the financially constrained firms, which are divided into two groups according to the level of financial constraints. We expect that the financially unconstrained firms can adjust dividend payments faster than the financially constrained firms, because the former can more easily finance investment funds through market accessibility than the latter. We analyzed descriptive statistics such as the mean, standard deviation, and median to delete outliers from the panel data, conducted a one-way analysis of variance to check industry-specific effects, and conducted difference tests of firm characteristic variables between innovative SMEs and noninnovative SMEs as well as between financially unconstrained firms and financially constrained firms. We also conducted correlation analysis and variance inflation factor analysis to detect any multicollinearity among the independent variables. Both the correlation coefficients and the variance inflation factors are low enough that multicollinearity among the independent variables may be ignored. Furthermore, we estimate both the Lintner model and the expansion model using panel regression analysis. We first test whether time-specific effects and firm-specific effects are present in our panel data through the Lagrange multiplier test proposed by Breusch and Pagan (1980), and secondly conduct the Hausman test to show that the fixed effects model fits our panel data better than the random effects model. The main results of this study can be summarized as follows.
The determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transactions cost theory, significantly explain the dividend policy of the innovative SMEs. The Lintner model indicates that firms maintain a stable, long-run target payout ratio and that firms partially adjust the gap between the current payout ratio and the target payout ratio each year. Among the core variables of the Lintner model, the past dividend per share has a larger effect on dividend smoothing than the current earnings per share. These results suggest that the innovative SMEs maintain a stable, long-run dividend policy that sustains the past dividend per share level in the absence of special corporate reasons. The main results show that the dividend adjustment speed of the innovative SMEs is faster than that of the noninnovative SMEs. This means that the innovative SMEs with a high level of R&D intensity can adjust dividend payments faster than the noninnovative SMEs, on the ground of their future growth opportunity and profitability. The other main results show that the dividend adjustment speed of the financially unconstrained SMEs is faster than that of the financially constrained SMEs. This means that the financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than the financially constrained firms, on the ground of their ability to finance investment funds through market accessibility. Furthermore, additional results show that the dividend adjustment speed of the innovative SMEs classified by the Small and Medium Business Administration is faster than that of the unclassified SMEs; the classified firms are linked with various financial policies and services such as credit guarantee services, policy funds for SMEs, venture investment funds, insurance programs, and so on. In conclusion, the past dividend per share and the current earnings per share suggested by the Lintner model mainly explain the dividend adjustment speed of the innovative SMEs, and the financial constraints explain it partially. Therefore, if managers properly understand the relations between financial constraints and dividend smoothing of innovative SMEs, they can maintain a stable, long-run dividend policy of the innovative SMEs through dividend smoothing. These are encouraging results for the Korean government, that is, the Small and Medium Business Administration, as it has implemented many policies committed to the innovative SMEs. This paper may have a few limitations because it may be only an early study about the relations between financial constraints and dividend smoothing of the innovative SMEs. Specifically, this paper may not adequately capture all of the subtle features of the innovative SMEs and the financially unconstrained SMEs. Therefore, we think it is necessary to expand the sample firms and control variables, and to use more elaborate analysis methods in future studies.
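For reference, the equations below give a standard textbook statement of the Lintner (1956) partial-adjustment model that the abstract estimates, written in the abstract's own notation; the derived expressions for the adjustment speed and implied target payout ratio are the usual ones and are not taken from the paper itself.

```latex
% Standard partial-adjustment form of the Lintner (1956) model
% (DPS_t: dividend per share, EPS_t: earnings per share,
%  \Omega: target payout ratio, c: speed of adjustment).
\begin{align}
  DPS_t - DPS_{t-1} &= \alpha + c\,\bigl(\Omega\, EPS_t - DPS_{t-1}\bigr) + \varepsilon_t \\
  \Longrightarrow\quad DPS_t &= \alpha + \beta_1\, DPS_{t-1} + \beta_2\, EPS_t + \varepsilon_t,
  \qquad \beta_1 = 1 - c,\quad \beta_2 = c\,\Omega .
\end{align}
```

A larger estimated speed of adjustment c (equivalently, a smaller coefficient on the lagged dividend per share) is what the abstract refers to as faster dividend adjustment.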

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control / v.5 no.2 / pp.215-235 / 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it makes it possible to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal crop patterns, and determine new greenhouse ranges. In this study, the control pattern for the greenhouse microclimate is categorized as cooling and heating. A dynamic model was adopted to simulate heating requirements and/or energy conservation effectiveness, such as energy saving by a night-time thermal curtain, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. On the other hand, the cooling effects of ventilation, shading, and a pad & fan system were partly analyzed by a static model. Through experimental work with a small-scale model greenhouse of 1.2 m × 2.4 m, it was found that cooling the greenhouse by spraying cold water directly on the greenhouse cover surface or by recirculating cold water through heat exchangers would be effective for summer cooling. The mathematical model developed for greenhouse simulation is highly applicable because it can reflect various climatic factors such as temperature, humidity, beam and diffuse solar radiation, wind velocity, etc. The model was closely verified against various weather data obtained through long-period greenhouse experiments. Most of the material relating to greenhouse heating or cooling components was obtained from the greenhouse model simulated mathematically using typical-year (1987) data of Jinju, Gyeongnam, but some of the material relating to greenhouse cooling was obtained by performing model experiments, which include analyzing the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1. The heating requirements of the model greenhouse were highly related to the minimum temperature set for the greenhouse. The setting temperature at night-time is much more influential on the heating energy requirement than that at day-time; therefore it is highly recommended that the night-time setting temperature be carefully determined and controlled. 2. The HDH data obtained by the conventional method were estimated on the basis of a considerably long-term average weather temperature together with the standard base temperature (usually 18.3°C). This kind of data can merely be used as a relative comparison criterion for heating load, but is not applicable to the calculation of greenhouse heating requirements because of the limited consideration of climatic factors and the inappropriate base temperature. By comparing the HDH data with the simulation results, it is found that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy saving effect of the night-time thermal curtain, as well as the estimated heating requirement, is found to be sensitively related to weather conditions: the thermal curtain adopted for simulation showed high effectiveness in energy saving, amounting to more than 50% of the annual heating requirement. 4. The ventilation performance during warm seasons is mainly influenced by the air exchange rate, even though there are some variations depending on greenhouse structural differences, weather, and cropping conditions.
For air exchange rates above 1 volume per minute, the reduction of temperature rise in both types of considered greenhouse becomes modest with additional increases in ventilation capacity. Therefore the desirable ventilation capacity is assumed to be 1 air change per minute, which is the recommended ventilation rate for common greenhouses. 5. In a glass-covered greenhouse with full production, under clear weather at 50% RH and a continuous 1 air change per minute, the temperature drop in the 50% shaded greenhouse and in the pad & fan systemed greenhouse was 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air change at this time was 36.6°C, which was 5.3°C above the ambient temperature. As a result, the greenhouse temperature can be maintained 3°C below the ambient temperature. But when RH is 80%, it was impossible to drop the greenhouse temperature below the ambient temperature because the possible temperature reduction by the pad & fan system at this time is not more than 2.4°C. 6. During the 3 months of the hot summer season, if the greenhouse is assumed to be cooled only when the greenhouse temperature rises above 27°C, the relationship between the RH of the ambient air and the greenhouse temperature drop (ΔT) was formulated as ΔT = -0.077·RH + 7.7. 7. Time-dependent cooling effects produced by operation of each or a combination of ventilation, 50% shading, and a pad & fan system of 80% efficiency were continuously predicted over one typical summer day. When the greenhouse was cooled only by 1 air change per minute, the greenhouse air temperature was 5°C above the outdoor temperature. Either method alone cannot drop the greenhouse air temperature below the outdoor temperature, even under fully cropped conditions. But when both systems were operated together, the greenhouse air temperature could be controlled to about 2.0-2.3°C below the ambient temperature. 8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a water flow rate of 1.3 liter/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0°C, which is about 10°C below the ambient temperature of 26.5-28.0°C at that time. The most important factor in cooling greenhouse air effectively with a water spray may be securing a plentiful source of cool water, such as ground water itself or cold water produced by a heat pump. Future work is focused not only on analyzing the feasibility of heat pump operation but also on finding the relationships between greenhouse air temperature (T_g), spraying water temperature (T_w), water flow rate (Q), and ambient temperature (T_o).
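A small numeric check of the empirical cooling relation reported in item 6 above: the sketch simply evaluates ΔT = -0.077·RH + 7.7 at a few humidity levels and is purely illustrative.

```python
# Evaluate the reported empirical relation between ambient relative humidity
# and the achievable greenhouse temperature drop (cooling assumed active only
# above 27 degC). Purely illustrative.
def temperature_drop(rh_percent: float) -> float:
    """Greenhouse temperature drop (degC) predicted from ambient RH (%)."""
    return -0.077 * rh_percent + 7.7


for rh in (50, 65, 80):
    print(f"RH {rh}% -> predicted drop {temperature_drop(rh):.1f} degC")
# At RH 80% the predicted drop is only about 1.5 degC, consistent with the
# abstract's note that cooling below ambient becomes impossible in humid air.
```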


A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.67-88 / 2018
  • Until recently, in recognition of the significance of sustainable growth and the competitiveness of small and medium sized enterprises (SMEs), governmental support has mainly been provided for tangible resources such as R&D, manpower, funds, etc. However, it is also true that concerns about the inefficiency of support systems, such as underestimated or redundant support, have been raised because conflicting policies exist in terms of the appropriateness, effectiveness, and efficiency of business support. From the perspective of the government or a company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources are the basis for creating competitive advantage, and we also emphasize value creation activities for it. This is why value chain network analysis is necessary in order to analyze inter-company deal relationships along a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exist the Technology Opportunity Discovery (TOD) system, which provides information on the relevant products or technology status of companies with patents through retrievals over patent, product, or company names, and CRETOP and KISLINE, which both allow viewing company (financial) information and credit information; however, there exists no online system that provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders that could have business deals in the future. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for planning corporate business strategy developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently. Further, we explore the function of network visualization in the intelligent value chain analysis system, which becomes the core information for understanding the industrial structure and for developing a company's new products. In order for a company to have competitive superiority over other companies, it is necessary to identify who the competitors are with patents or products currently being produced, and searching for similar companies or competitors by each type of industry is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in providing information regarding potential customers when both parties enter similar fields together. Identifying a competitor at the enterprise or industry level by using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis (a toy example of this idea follows this abstract). The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with corporate information simply collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry.
In particular, it can be useful as an information analysis tool at the corporate level, for tasks such as identification of the industry structure, identification of competitor trends, analysis of competitors, locating suppliers (sellers) and demanders (buyers), industry trends by item, finding promising items, finding new entrants, finding core companies and items by value chain, and recognizing the patents of the corresponding companies, etc. In addition, based on the objectivity and reliability of the analysis results derived from transaction deal information and financial data, it is expected that the value chain network system will be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid-term or short-term demand forecasting, in particular for more than 15,000 member companies in Korea and for employees in R&D service sectors, government-funded research institutes, and public organizations. In order to strengthen the business competitiveness of companies, technology, patent, and market information have so far been provided mainly by government agencies and private research-and-development service companies. This service has been presented in the frame of patent analysis (mainly for rating and quantitative analysis) or market analysis (for market prediction and demand forecasting based on market reports). However, there was a limitation in resolving the lack of information, which is one of the difficulties that firms in Korea often face at the commercialization stage. In particular, it is much more difficult to obtain information about competitors and potential candidates. In this study, the real-time value chain analysis and visualization service module based on the proposed network map and the data at hand is demonstrated, along with the expected market share, estimated sales volume, and contact information (which implies potential suppliers for raw materials/parts and potential demanders for complete products/modules). In future research, we intend to carry out in-depth research to further investigate the indices of competitive factors through participation of research subjects, to newly develop competitive indices for competitors or substitute items, and to additionally improve the performance of VCNS with data mining techniques and algorithms.
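As a toy illustration of the inter-company transaction network idea referenced above, the sketch below builds a seller-to-buyer graph from deal records and flags firms that share buyers as potential competitors. The firm names, fields, and the shared-buyer rule are assumptions for illustration, not the VCNS implementation.

```python
# Toy seller->buyer transaction network; records below are placeholders.
import networkx as nx

deals = [  # (seller, buyer, amount)
    ("A-Materials", "C-Assembly", 120), ("B-Materials", "C-Assembly", 90),
    ("A-Materials", "D-Assembly", 60),  ("B-Materials", "E-Assembly", 40),
]

G = nx.DiGraph()
for seller, buyer, amount in deals:
    G.add_edge(seller, buyer, weight=amount)


def shared_buyers(g: nx.DiGraph, firm_a: str, firm_b: str) -> set:
    """Buyers that purchase from both firms, a simple competitor signal."""
    return set(g.successors(firm_a)) & set(g.successors(firm_b))


print(shared_buyers(G, "A-Materials", "B-Materials"))  # {'C-Assembly'}
print(nx.degree_centrality(G))  # rough proxy for 'core companies' in the chain
```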

Estimation of Productivity for Quercus variabilis Stand by Forest Environmental Factors (삼림환경인자(森林環境因子)에 의한 굴참나무임분(林分)의 생산력추정(生産力推定))

  • Lee, Dong Sup;Chung, Young Gwan
    • Journal of Korean Society of Forest Science / v.75 no.1 / pp.1-18 / 1986
  • This study was initiated to estimate the productivity of Quercus variabilis stands. The practical objective, however, was to provide information for establishing the basis of selecting suitable sites for Quercus variabilis. Productivity, measured in terms of DBH, height, basal area, and stem volume, was hypothesized in each case to be a function of a group of factors. This study considered 32 factors, 20 of which were forest environmental factors such as tree age, latitude, percent slope, etc., and the rest of which were soil factors such as soil moisture, total nitrogen, available $P_2O_5$, etc. The data on the 4 productivity measurements of Quercus variabilis growth and the related factors were collected from 99 sample plots in Kyeongbook and Chungbook provinces. Some of the factors considered were, by nature, discrete variables and the others continuous variables. Each kind of factor was classified into 3 or 4 categories, and the total number of such categories eventually amounted to 110. Each category was then treated as an independent variable. This amounts to saying that each individual variable was treated as a dummy variable and assigned a value of 1 or 0; however, the first category of each factor was deleted from the normal equation for statistical reasons. First of all, each of the 4 productivity measurements of Quercus variabilis growth was regressed on those 110 categories. Secondly, the partial correlation coefficients were measured between each pair of the 4 productivity measurements and the 32 individual factors. Finally, the relative scores were estimated in order to derive the category ranges. The results of these statistical analyses can be summarized as follows: 1) Growth measured in terms of height seems to be the more significant criterion for estimating the productivity of Quercus variabilis. 2) The productivity of forest on stocked land may better be estimated in terms of forest environmental factors; on the other hand, that of unstocked land may be estimated in terms of the physio-chemical factors of the soil. 3) The factors that have a strongly positive relation to all growth measurements are age group, effective soil depth, soil moisture, etc. This implies that these factors might effectively be used as criteria for selecting suitable sites for Quercus variabilis. 4) Parent rock, latitude, total nitrogen, age group, effective soil depth, soil moisture, organic matter, etc., had more significant category ranges for tree growth; therefore, suitable sites for Quercus variabilis may be selected based on this information. In conclusion, the above results obtained by the multivariable analysis can serve not only as important criteria for estimating the growth of Quercus variabilis but also as useful guidance for selecting suitable sites and for the rational management of Quercus variabilis forests.
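The dummy-variable design described above (each factor split into categories, the first category dropped, and growth regressed on the rest) can be illustrated with the hypothetical sketch below; the column names and toy data are assumptions, not the study's measurements.

```python
# Toy dummy-variable regression: categorical site factors are one-hot encoded,
# the first category of each factor is dropped (as in the study's normal
# equations), and stand height is regressed on the remaining indicators.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "height_m":      [12.1, 9.4, 14.0, 8.2, 11.5, 13.3],
    "age_group":     ["20-30", "10-20", "30-40", "10-20", "20-30", "30-40"],
    "soil_moisture": ["moist", "dry", "moist", "dry", "moderate", "moist"],
})

X = pd.get_dummies(df[["age_group", "soil_moisture"]], drop_first=True, dtype=float)
X = sm.add_constant(X)
model = sm.OLS(df["height_m"], X).fit()
print(model.params)  # estimated category effects relative to the dropped baseline
```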


Synthesis of a Dopamine Transporter Imaging Agent, N-(3-[$^{18}F$]fluoropropyl)-$2{\beta}$-carbomethoxy-$3{\beta}$-(4-iodophenyl)nortropane (도파민운반체 방사성추적자 N-(3-[$^{18}F$]Fluoropropyl)-$2{\beta}$-carbomethoxy-$3{\beta}$-(4-iodophenyl)nortropane의 합성)

  • Choe, Yearn-Seong;Oh, Seung-Jun;Chi, Dae-Yoon;Kim, Sang-Eun;Choi, Yong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine / v.33 no.3 / pp.298-305 / 1999
  • Purpose: N-(3-[$^{18}F$]Fluoropropyl)-$2{\beta}$-carbomethoxy-$3{\beta}$-(4-iodophenyl)nortropane ([$^{18}F$]FP-CIT) has been shown to be very useful for imaging the dopamine transporter. However, synthesis of this radiotracer is somewhat troublesome. In this study, we used a new method for the preparation of [$^{18}F$]FP-CIT to increase the radiochemical yield and effective specific activity. Materials and Methods: [$^{18}F$]FP-CIT was prepared by N-alkylation of nor-${\beta}$-CIT (2 mg) with 3-bromo-1-[$^{18}F$]fluoropropane in the presence of $Et_3N$ (5-6 drops of $DMF/CH_3CN$, $140^{\circ}C$, 20 min). 3-Bromo-1-[$^{18}F$]fluoropropane was synthesized from $5{\mu}L$ of 3-bromo-1-trifluoromethanesulfonyloxypropane (3-bromopropyl-1-triflate) and $nBu_4N^{18}F$ at $80^{\circ}C$. The final compound was purified by reverse-phase HPLC and formulated in 13% ethanol in saline. Results: 3-Bromo-1-[$^{18}F$]fluoropropane was obtained from 3-bromopropyl-1-triflate and $nBu_4N^{18}F$ in 77-80% yield. N-Alkylation of nor-${\beta}$-CIT with 3-bromo-1-[$^{18}F$]fluoropropane was carried out at $140^{\circ}C$ using acetonitrile containing a small volume of DMF as the solvent. The overall yield of [$^{18}F$]FP-CIT was 5-10% (decay-corrected), with a radiochemical purity higher than 99% and an effective specific activity higher than that reported in the literature based on HPLC data. The final [$^{18}F$]FP-CIT solution had the optimal pH (7.0) and was pyrogen-free. Conclusion: In this study, 3-bromopropyl-1-triflate was used as the precursor for the [$^{18}F$]fluorination reaction, and new conditions were developed for the purification of [$^{18}F$]FP-CIT by HPLC. We established this new method for the preparation of [$^{18}F$]FP-CIT, which gave a high effective specific activity and a relatively good yield.
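The 5-10% decay-corrected yield quoted above can be illustrated with the standard decay-correction arithmetic below; the activities and elapsed time are placeholders, and only the fluorine-18 half-life (about 109.8 min) is a physical constant.

```python
# Standard decay-correction arithmetic; all activity/time inputs are placeholders.
import math

F18_HALF_LIFE_MIN = 109.8  # physical half-life of fluorine-18


def decay_corrected_yield(product_mbq: float, start_mbq: float,
                          elapsed_min: float) -> float:
    """Radiochemical yield (%) with product activity corrected back to start."""
    corrected_product = product_mbq * math.exp(
        math.log(2) * elapsed_min / F18_HALF_LIFE_MIN)
    return corrected_product / start_mbq * 100.0


# Assumed example: 2,500 MBq of [18F]fluoride at the start, 90 min of synthesis
# and HPLC purification, 140 MBq of formulated product -> roughly 10% corrected.
print(f"{decay_corrected_yield(140.0, 2500.0, 90.0):.1f}%")
```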
