• Title/Summary/Keyword: Optimization program

Search Results: 1,061

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology
    • /
    • v.2 no.1
    • /
    • pp.28-36
    • /
    • 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment and a computer-aided simulation study were performed. Materials and Methods: In the animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals for 3 minutes, starting immediately after the injection of contrast medium. Combinations of three different amounts (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P) and liver (L) was measured in each image, and time-attenuation curves for A, P and L were thus obtained. The degree of maximum enhancement (Imax) and time to reach peak enhancement (Tmax) of A, P and L were determined, and times to equilibrium (Teq) were analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in various compartments of the human body was designed. The input variables were the concentrations, volumes and injection rates of the contrast media used. The program generated the time-attenuation curves of A, P and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve, we calculated and plotted the optimal temporal window (the time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal Experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax A, P and L. In addition, Tmax A, P, L and Teq were prolonged in parallel with increases in injection time, and the time-attenuation curve shifted upward and to the right. For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal and hepatic enhancement, though Tmax A, P and L remained the same; the time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement, and enhancement of the portal vein and hepatic parenchyma also showed some increase, though the effect was less prominent. An increase in the rate of contrast injection shifted the time-attenuation curve to the left and upward. B. Computer Simulation: At a faster injection rate, there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold; the temporal window was therefore narrow and the optimal area decreased. Conclusion: Increases in volume, concentration and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good-quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model. (A simplified sketch of this kind of compartment simulation follows this entry.)

  • PDF
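
The abstract above describes a compartment-based simulation that turns injection volume, concentration and rate into time-attenuation curves and then scores them by the optimal temporal window and area metrics. The following is a minimal, purely illustrative Python sketch of that idea, not the authors' program: the two-compartment model, rate constants, iodine-to-HU scale factor and thresholds are all assumed placeholder values.

```python
# Minimal sketch (not the authors' model): a toy two-compartment simulation of
# liver enhancement after contrast injection, plus the window metrics described
# in the abstract. All rate constants and thresholds below are illustrative.
import numpy as np

def simulate_enhancement(volume_ml, conc_mgI_ml, rate_ml_s,
                         k_in=0.05, k_out=0.02, scale=0.002, dt=1.0, t_end=300):
    """Return time [s] and liver enhancement [HU] for a toy 2-compartment model."""
    t = np.arange(0, t_end, dt)
    inj_duration = volume_ml / rate_ml_s             # injection time [s]
    central, liver = 0.0, 0.0
    hu = np.zeros_like(t)
    for i, ti in enumerate(t):
        infusion = conc_mgI_ml * rate_ml_s if ti < inj_duration else 0.0
        central += (infusion - k_in * central) * dt  # iodine entering blood pool
        liver += (k_in * central - k_out * liver) * dt
        hu[i] = scale * liver                        # crude iodine-to-HU conversion
    return t, hu

def window_metrics(t, hu, lower=10.0, upper=30.0):
    """Optimal temporal window (s above lower threshold) and areas (HU*s)."""
    dt = t[1] - t[0]
    window = (hu >= lower).sum() * dt
    total_area = np.trapz(np.clip(hu - lower, 0, None), dx=dt)
    optimal_area = np.trapz(np.clip(np.minimum(hu, upper) - lower, 0, None), dx=dt)
    return window, total_area, optimal_area

t, hu = simulate_enhancement(volume_ml=120, conc_mgI_ml=300, rate_ml_s=2.0)
print(window_metrics(t, hu))
```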

The Fabrication and Characteristic for Narrow-band Pass Color-filter Deposited by Ti3O5/SiO2 Multilayer (Ti3O5/SiO2 다층박막를 이용한 협대역 칼라투과필터 제작 및 특성연구)

  • Park, Moon-Chan;Ko, Kyun-Chae;Lee, Wha-Ja
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.16 no.4
    • /
    • pp.357-362
    • /
    • 2011
  • Purpose: Narrow-band pass color-filters with a 500 nm central wavelength and 12 nm FWHM were fabricated using a $Ti_3O_5/SiO_2$ multilayer, and their characteristics and structures were studied. Methods: The optical constants, n and k, of the $Ti_3O_5$ and $SiO_2$ thin films were obtained from the transmittances of the individual films. The narrow-band pass color-filters were designed with these optical constants, and the AR coating of the filter was also designed. The $Ti_3O_5/SiO_2$ multilayer filters were made with an electron beam evaporation apparatus, and the transmittances of the filters were measured with a spectrophotometer. The number of layers and the thicknesses of the filters were determined from SEM images of the filter cross-sections, and the composition of the filters was analysed by XPS. Results: The optimized AR coating for the narrow-band pass color-filter was [air$|SiO_2(90)|Ti_3O_5(36)|SiO_2(5)|Ti_3O_5(73)|SiO_2(30)|Ti_3O_5(15)|$ glass], and the optimized filter layers for the color filter were [air$|SiO_2(192)|Ti_3O_5(64)|SiO_2(102)|Ti_3O_5(66)|SiO_2(112)|Ti_3O_5(74)|SiO_2(120)|Ti_3O_5(68)|SiO_2(123)|Ti_3O_5(80)|SiO_2(109)|Ti_3O_5(70)|SiO_2(105)|Ti_3O_5(62)|SiO_2(99)|Ti_3O_5(63)|SiO_2(98)|Ti_3O_5(51)|SiO_2(60)|Ti_3O_5(42)|SiO_2(113)|Ti_3O_5(88)|SiO_2(116)|Ti_3O_5(68)|SiO_2(89)|Ti_3O_5(49)|SiO_2(77)|Ti_3O_5(48)|SiO_2(84)|Ti_3O_5(51)|SiO_2(85)|Ti_3O_5(48)|SiO_2(59)|Ti_3O_5(34)|SiO_2(71)|Ti_3O_5(44)|SiO_2(65)|Ti_3O_5(45)|SiO_2(81)|Ti_3O_5(52)|SiO_2(88)|$ glass]. SEM images showed that the color-filters fabricated from the simulation data consisted of 41 layers with a $SiO_2$ top layer, and XPS analysis confirmed that the filters were composed of a $SiO_2$/$Ti_3O_5$ multilayer. It was also found that a mixed thin film of $TiO_2$ and $Ti_3O_5$ formed during the deposition of the $Ti_3O_5$ material. Conclusions: Narrow-band pass color-filters with a 500 nm central wavelength and 12 nm FWHM were fabricated using a 41-layer $Ti_3O_5/SiO_2$ multilayer, and a mixed $TiO_2$/$Ti_3O_5$ thin film was found to form during the deposition of the $Ti_3O_5$ material.
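
The filter design above rests on computing the transmittance of a dielectric stack from the optical constants of the $Ti_3O_5$ and $SiO_2$ layers. Below is a compact, hedged sketch of that calculation using the standard characteristic-matrix method at normal incidence; the refractive indices, thicknesses and substrate index are illustrative examples, not the paper's fitted optical constants or its 41-layer design.

```python
# Illustrative sketch: normal-incidence transmittance of a dielectric multilayer
# via the characteristic-matrix method. Indices and thicknesses are examples,
# not the paper's fitted optical constants.
import numpy as np

def stack_transmittance(wavelength_nm, layers, n_in=1.0, n_sub=1.52):
    """layers: list of (refractive_index, physical_thickness_nm)."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength_nm        # phase thickness
        m = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ m
    # Transmission coefficient of the full assembly (real, lossless indices)
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_sub * M[0, 1]
                    + M[1, 0] + n_sub * M[1, 1])
    return (n_sub / n_in) * abs(t) ** 2

# Example: an alternating high/low-index stack, roughly quarter-wave near 500 nm
layers = [(2.3, 54), (1.46, 86)] * 10                    # (n, d[nm]) pairs
for wl in (480, 500, 520):
    print(wl, round(stack_transmittance(wl, layers), 4))
```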

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for each element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word needed to represent the membership function index. In our case, Length = 3 * (5 + 3) = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension would then have been 8*5 bits, and the dimension of the memory 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on the memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10]. (A small software sketch of this encoding follows this entry.)

  • PDF
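
As a complement to the entry above, here is a small software sketch of the sparse membership memory it describes: each row of the antecedent memory stores at most nfm = 3 (index, value) pairs of 3 + 5 bits, i.e. 24 bits instead of the 8*5 = 40 bits needed for full vectorial memorization, and a lookup mimicking the comparator (combinatory net) returns a weight only when the stored index matches. This is an illustrative Python model under those assumptions, not the original hardware design.

```python
# Sketch of the sparse membership memory described above (software model, not
# the actual hardware): each row stores at most NFM (index, value) pairs,
# 3 index bits + 5 value bits each, i.e. 24 bits per row instead of 8*5 = 40.
NFM = 3           # max non-null memberships per element of the universe
IDX_BITS = 3      # 8 fuzzy sets
VAL_BITS = 5      # 32 discretization levels

def pack_row(memberships):
    """memberships: dict {fuzzy_set_index: value}, at most NFM non-null entries."""
    assert len(memberships) <= NFM
    word = 0
    for slot, (idx, val) in enumerate(sorted(memberships.items())):
        field = (idx << VAL_BITS) | val              # 8-bit (index, value) field
        word |= field << (slot * (IDX_BITS + VAL_BITS))
    return word                                      # fits in 24 bits

def lookup(word, fuzzy_set_index):
    """Emulate the combinatory net: return the value if the index matches, else 0."""
    for slot in range(NFM):
        field = (word >> (slot * (IDX_BITS + VAL_BITS))) & 0xFF
        idx, val = field >> VAL_BITS, field & ((1 << VAL_BITS) - 1)
        if idx == fuzzy_set_index and val != 0:
            return val
    return 0

row = pack_row({2: 12, 3: 20})          # an element covered by two fuzzy sets
print(lookup(row, 3), lookup(row, 5))   # -> 20 0
```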

Evaluation of Dose Distributions Recalculated with Per-field Measurement Data under the Condition of Respiratory Motion during IMRT for Liver Cancer (간암 환자의 세기조절방사선치료 시 호흡에 의한 움직임 조건에서 측정된 조사면 별 선량결과를 기반으로 재계산한 체내 선량분포 평가)

  • Song, Ju-Young;Kim, Yong-Hyeob;Jeong, Jae-Uk;Yoon, Mee Sun;Ahn, Sung-Ja;Chung, Woong-Ki;Nam, Taek-Keun
    • Progress in Medical Physics
    • /
    • v.25 no.2
    • /
    • pp.79-88
    • /
    • 2014
  • The dose distributions within the real volumes of tumor targets and critical organs during internal target volume-based intensity-modulated radiation therapy (ITV-IMRT) for liver cancer were recalculated by applying the effects of actual respiratory organ motion, and the dosimetric features were analyzed through comparison with gating IMRT (Gate-IMRT) plan results. The ITV was created using MIM software, and a moving phantom was used to simulate respiratory motion. The doses were recalculated with a 3-dimensional dose-volume histogram (3DVH) program based on the per-field data measured with a MapCHECK2 2-dimensional diode detector array. Although a sufficient prescription dose covered the PTV during ITV-IMRT delivery, the dose homogeneity in the PTV was inferior to that with the Gate-IMRT plan. We confirmed that there were higher doses to the organs-at-risk (OARs) with ITV-IMRT, as expected when using an enlarged field, but the increased dose to the spinal cord was not significant, and the increased doses to the liver and kidney could be considered minor when reinforced constraints were applied during IMRT plan optimization. Because the Gate-IMRT method also has disadvantages, such as unexpected dosimetric variations when applying the gating system and an increased treatment time, it is better to perform a prior analysis of the patient's respiratory condition and of the importance and fulfillment of the IMRT plan dose constraints in order to select an optimal IMRT method with which to correct for the effect of respiratory organ motion.
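
The comparison above is expressed through dose-volume metrics. The short sketch below only illustrates how a cumulative dose-volume histogram and a D95-type coverage value are read from a dose array; the dose grid is random placeholder data, and the study itself used per-field measurements and the commercial 3DVH software rather than code like this.

```python
# Minimal illustration of a cumulative dose-volume histogram (DVH), the kind of
# metric used to compare the ITV-IMRT and Gate-IMRT plans above. The dose grid
# here is random placeholder data, not measured or recalculated patient dose.
import numpy as np

def cumulative_dvh(dose, bin_width=0.1):
    """dose: 1-D array of voxel doses [Gy] inside one structure.
    Returns (dose bins, fraction of volume receiving at least that dose)."""
    bins = np.arange(0.0, dose.max() + bin_width, bin_width)
    volume = np.array([(dose >= d).mean() for d in bins])
    return bins, volume

rng = np.random.default_rng(0)
ptv_dose = rng.normal(loc=50.0, scale=2.0, size=10_000)  # placeholder PTV doses
bins, vol = cumulative_dvh(ptv_dose)
d95 = bins[vol >= 0.95][-1]                              # dose covering 95% of the volume
print(f"D95 ~ {d95:.1f} Gy")
```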

A Survey on the Preferences and Recognition of Multigrain Rice by Adding Grains and Legumes (곡류와 두류를 혼합한 잡곡밥의 기호도 및 인식 조사)

  • Jang, Hye-Lim;Im, Hee-Jin;Lee, Yu-Jin;Kim, Kun-Woo;Yoon, Kyung-Young
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.41 no.6
    • /
    • pp.853-860
    • /
    • 2012
  • This study investigated the preference and recognition of cooked rice mixed with multigrains. The data for the analysis were collected from 464 people residing in the Seoul, Gyeongsang and Jeolla areas, and analyzed with the SPSS 18.0 program. The results showed that 77.8% of the respondents liked cooked rice mixed with multigrains, showing significant differences by age (p<0.001) and occupation (p<0.01). Of the respondents, 33.8% consumed cooked rice mixed with multigrains at least once a day, showing significant differences by gender (p<0.01), age (p<0.001) and occupation (p<0.001). The most popular types of grains to mix with rice were, in order, black rice (3.8)> brown rice (3.7)> barley (3.7)> proso millet (3.4)> foxtail millet (3.4)> SoRiTae (3.3)> sorghum (3.2)> adlay (3.2)> mung bean (3.1)> buckwheat (3.0)> BacTae (2.8). A total of 32.5% of the respondents answered that 21~30% was the proper mixing ratio for multigrain-added cooked rice, showing significant differences by age (p<0.001), occupation (p<0.001) and residential area (p<0.05). Three or four kinds of grains were preferred for mixing into cooked rice, showing significant differences by age and occupation (p<0.001). Of the respondents, 43.1% chose price reduction as the most desired improvement for multigrains in the market. Most of the subjects had an affirmative view of the intake of cooked rice mixed with multigrains, but recognized that multigrains were expensive. These results provide basic information for increasing the availability of multigrains and optimizing the multigrain mixing ratio.

Recent Progress in Air Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2007 (설비공학 분야의 최근 연구 동향 : 2007년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwa-Taik;Shin, Dong-Sin;Choi, Chang-Ho;Lee, Dae-Young;Kim, Seo-Young;Kwon, Yong-Il
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.20 no.12
    • /
    • pp.844-861
    • /
    • 2008
  • The papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2007 have been reviewed. Focus has been put on the current status of research in the aspects of heating, cooling, ventilation, sanitation and building environments. The conclusions are as follows. (1) The research trends in fluid engineering have been surveyed in the groups of general fluid flow, fluid machinery and piping, etc. New research topics include micro/nano fluids, micropumps and fuel cells. Traditional CFD was still popular and widely used in research and development. Studies on fans and pumps were performed in the field of fluid machinery, and flow characteristics and fin shape optimization were studied in the field of piping systems. (2) The research works on heat transfer have been reviewed in the fields of heat transfer characteristics, heat exchangers, and desiccant cooling systems. The research on heat transfer characteristics includes thermal transport in pulse tubes, high temperature superconductors, ground heat exchangers, fuel cell stacks and ice slurry systems. For heat exchangers, research on fin-tube heat exchangers, plate heat exchangers, condensers and gas coolers has been actively carried out, and research works on heat transfer augmenting tubes have also been reported. For desiccant cooling systems, studies on the design and operating conditions of desiccant rotors as well as performance indexes are noticeable. (3) In the field of refrigeration, many papers were presented on air conditioning systems using CO2 as a refrigerant. The issues of two-stage compression, oil selection, and the appropriate oil charge were treated. The subject of alternative refrigerants was also studied steadily: hydrocarbons, DME and their mixtures were considered, and various heat transfer correlations were proposed. (4) Research papers have been reviewed in the field of building facilities, grouped into research on heat and cold sources, air conditioning and air cleaning, ventilation and fire research including tunnel ventilation, flow control of piping systems, and sound research on drain systems. The main focus has been on promoting the efficient or effective use of energy, which helps to save energy and results in reduced environmental pollution and operating cost. (5) In the architectural environment field, studies mostly focused on analyzing the indoor environment in various spaces such as cars, old tombs and machine rooms. Moreover, subjects in various areas, such as the evaluation of noise, thermal environment and indoor air quality and the development of energy analysis programs, were researched by various methods of survey, simulation and field experiment.

Optimization Process Models of Gas Combined Cycle CHP Using Renewable Energy Hybrid System in Industrial Complex (산업단지 내 CHP Hybrid System 최적화 모델에 관한 연구)

  • Oh, Kwang Min;Kim, Lae Hyun
    • Journal of Energy Engineering
    • /
    • v.28 no.3
    • /
    • pp.65-79
    • /
    • 2019
  • This study attempted to estimate the optimal facility capacity by combining renewable energy sources that can be connected with gas CHP in industrial complexes. In particular, we reviewed industrial complexes subject to energy use plans from 2013 to 2016. Although it was excluded from the regional designation, the Sejong industrial complex, which has an annual fuel usage of 38 thousand TOE and a high heat density of $92.6Gcal/km^2{\cdot}h$, was selected for the study. We analyzed the optimal operation model of a CHP hybrid system linking fuel cells and photovoltaic power generation using HOMER Pro, a renewable energy hybrid system economic analysis program. In addition, to improve the reliability of the research, we analyzed not only the total heat demand but also the heat demand pattern of the dominant sector for thermal energy, the main energy supplied by CHP, and added economic benefits to compare the relative benefits. As a result, the total indirect heat demand of the Sejong industrial complex under construction was 378,282 Gcal per year, of which the paper industry accounted for 77.7% (293,754 Gcal per year). For the indirect heat demand of the entire industrial complex, a single CHP has an optimal capacity of 30,000 kW; in this case, the CHP supplies 275,707 Gcal (72.8%) of the heat production, while the peak-load boiler (PLB) supplies 103,240 Gcal (27.2%). In the CHP, fuel cell and photovoltaic combination, the optimal capacities are 30,000 kW, 5,000 kW and 1,980 kW, respectively; here, the CHP supplied 275,940 Gcal (72.8%), the fuel cell 12,390 Gcal (3.3%), and the PLB 90,620 Gcal (23.9%). The CHP capacity was not reduced because reducing it would have required uneconomically excessive operation of the PLB to cover the resulting shortfall in heat production. On the other hand, in terms of the indirect heat demand of the paper industry, which is the dominant sector, the optimal capacities of the CHP, fuel cell and photovoltaic combination are 25,000 kW, 5,000 kW and 2,000 kW, with heat production analyzed as CHP 225,053 Gcal (76.5%), fuel cell 11,215 Gcal (3.8%) and PLB 58,012 Gcal (19.7%). The economic analysis under the current electricity and gas markets confirms that a return on investment is not achievable. Nevertheless, we confirmed that the CHP hybrid system combining CHP, fuel cells and solar power can improve operating conditions by about KRW 9.3 billion annually relative to a single CHP system.
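
The heat-sharing figures above follow from a dispatch in which the CHP (and, in the hybrid case, the fuel cell) covers demand up to its capacity and the peak-load boiler covers the remainder. The toy sketch below reproduces only that sharing logic with synthetic hourly demand and made-up capacities; it is not HOMER Pro and does not use the Sejong complex data.

```python
# Toy dispatch sketch of the sharing logic described above: CHP (and, in the
# hybrid case, a fuel cell) covers heat demand up to capacity, and the peak-load
# boiler (PLB) covers the remainder. The hourly demand is synthetic, not the
# Sejong complex data, and capacities are expressed directly in heat terms.
import numpy as np

def dispatch(hourly_demand_gcal, chp_cap, fc_cap=0.0):
    chp = np.minimum(hourly_demand_gcal, chp_cap)
    fc = np.minimum(hourly_demand_gcal - chp, fc_cap)
    plb = hourly_demand_gcal - chp - fc
    total = hourly_demand_gcal.sum()
    return {src: (round(e.sum(), 1), round(100 * e.sum() / total, 1))
            for src, e in (("CHP", chp), ("FC", fc), ("PLB", plb))}

rng = np.random.default_rng(1)
demand = np.clip(rng.normal(43, 15, 8760), 0, None)   # synthetic Gcal/h profile
print(dispatch(demand, chp_cap=35.0, fc_cap=5.0))     # Gcal and share (%) per source
```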

Optimization of GFR value according to Kidney Depth Measurement Methods (신장 Depth 측정 방법에 따른 GFR 값의 최적화)

  • Kwon, Hyeong-Jin;Moon, Il-Sang;Noh, Gyeong Woon;Kang, Keon Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.2
    • /
    • pp.25-28
    • /
    • 2019
  • Purpose: In patients with an unusual kidney position, the GFR (glomerular filtration rate) values obtained from a $^{99m}Tc-DTPA$ renal dynamic imaging study differ significantly according to the assumed depth of the kidney. We therefore compared the GFR values obtained with different depth measurement methods against an in-vitro test. Thirty adult patients who underwent renal studies were included; 27 patients had kidneys in the usual position and 3 in unusual positions. $555{\pm}37MBq$ of $^{99m}Tc-DTPA$ was administered to all patients, and a GE Infinia gamma camera was used. GFR values were obtained in-vivo (Gates method) and in-vitro (blood sampling). The kidney depth in-vivo was calculated by three methods (Tonnesen, manual, Taylor); in-vitro, GFR was determined by blood test. Differences in the mean GFR values and the correlation between depth and GFR values were evaluated using the SPSS 12.0 statistical program. The GFR values for the 27 patients with kidneys in the usual position were as follows (1. Tonnesen, 2. manual, 3. Taylor, 4. in-vitro): $69.3{\pm}4.2$, $88.2{\pm}5.6$, $77.8{\pm}4.3$, $82.2{\pm}5.8ml/min$. The three unusual cases were as follows. First (congenital renal anomaly): 66.4, 101.24, 69.07, 94.8 ml/min; second (transplanted kidney): 12.22, 29.99, 19.36, 23.5 ml/min; third (horseshoe kidney): 37.37, 93.54, 35.9, 92.5 ml/min. There was a difference between the Tonnesen and manual methods for kidneys in the usual position (p<0.05), with no significant difference between the other methods. However, there were significant differences in the cases of unusually positioned kidneys. Correlation analysis between kidney depth and GFR value gave Pearson correlations of 0.298 for the right kidney and 0.322 for the left kidney. Compared with the in-vitro GFR values, it was useful to calculate the GFR by measuring the kidney depth with the manual formula when the kidneys were in an unusual position. GFR values and kidney depth were significantly related.
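
The sensitivity of camera-based GFR to kidney depth comes from the exponential attenuation correction applied to the renal counts. The sketch below shows only that structure: a depth estimate feeds an exp(mu*d) correction of the background-subtracted counts, and the corrected uptake fraction is mapped linearly to a GFR value. The attenuation coefficient and the uptake-to-GFR mapping are illustrative placeholders, not the published Gates or Tonnesen constants.

```python
# Structural sketch of why kidney depth matters in a camera-based (Gates-type)
# GFR calculation: depth feeds an exponential attenuation correction, which
# scales the estimated uptake fraction and hence the GFR. MU and the linear
# uptake-to-GFR mapping below are illustrative placeholders, not the published
# Gates or Tonnesen constants.
import math

MU = 0.15  # assumed soft-tissue attenuation coefficient for 99mTc [1/cm]

def depth_corrected_uptake(kidney_counts, background_counts, depth_cm,
                           injected_counts):
    """Fraction of injected activity attributed to one kidney."""
    net = kidney_counts - background_counts
    corrected = net * math.exp(MU * depth_cm)      # undo tissue attenuation
    return corrected / injected_counts

def gfr_estimate(uptake_fraction_both, slope=100.0, intercept=0.0):
    """Map total uptake fraction to GFR [ml/min]; slope/intercept are placeholders."""
    return slope * uptake_fraction_both + intercept

# Same raw counts with two different depth estimates give different GFR values,
# which is the effect the study quantifies.
for depth in (4.0, 6.0):
    up = depth_corrected_uptake(300_000, 50_000, depth, 2_000_000)
    print(depth, round(gfr_estimate(2 * up), 1))
```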

A study on the feasibility evaluation technique of urban utility tunnel by using quantitative indexes evaluation and benefit·cost analysis (정량적 지표평가와 비용·편익 분석을 활용한 도심지 공동구의 타당성 평가기법 연구)

  • Lee, Seong-Won;Chung, Jee-Seung;Na, Gwi-Tae;Bang, Myung-Seok;Lee, Joung-Bae
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.1
    • /
    • pp.61-77
    • /
    • 2019
  • If a new utility tunnel is planned for existing high-density urban areas in Korea, a rational decision-making process is needed, such as determining the optimum design capacity using a feasibility evaluation system based on quantitative evaluation indexes together with an economic evaluation. A previous study presented the importance weights of individual higher-level indexes (3 items) and sub-indexes (16 items) through an analytic hierarchy process (AHP) for the quantitative evaluation index items, considering the characteristics of each urban type. In addition, an economic evaluation method was proposed considering 10 benefit items and 8 cost items, adding 3 new items, including the effects of traffic accidents, noise reduction and socio-economic losses, to the existing items of the benefit-cost analysis so as to suit urban utility tunnels. This study presents a quantitative feasibility evaluation method using the importance weights of the 16 sub-index items in the road management, public facilities and urban environment sectors. The results of the quantitative feasibility and economic evaluations were then compared and analyzed for 123 main road sections of Seoul, and a comprehensive evaluation method was proposed by combining the two evaluation results. The design capacity optimization program, which will be developed by programming the logic of the quantitative feasibility and economic evaluation system presented in this study, will be utilized in the planning and design phases of urban utility tunnels and will ultimately contribute to their vitalization.
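
The evaluation framework above combines AHP importance weights with normalized sub-index scores and pairs the result with a benefit-cost analysis. The sketch below illustrates only that combination step; the three sector weights, the scores, the cash flows and the discount rate are invented placeholders, not the study's 16-index AHP weights or its 10 benefit and 8 cost items.

```python
# Compact sketch of the two evaluation steps described above: (1) a weighted
# quantitative feasibility score from AHP importance weights and normalized
# sub-index scores, and (2) a discounted benefit-cost ratio. All weights,
# scores, cash flows and the discount rate are illustrative placeholders.

def feasibility_score(weights, scores):
    """weights: {index_name: AHP weight, summing to 1}; scores: normalized 0-1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    return sum(weights[k] * scores[k] for k in weights)

def benefit_cost_ratio(benefits, costs, rate=0.045):
    """benefits/costs: yearly cash flows (year 0 first); simple discounting."""
    pv = lambda flows: sum(f / (1 + rate) ** t for t, f in enumerate(flows))
    return pv(benefits) / pv(costs)

weights = {"road_management": 0.40, "public_facilities": 0.35, "urban_environment": 0.25}
scores = {"road_management": 0.70, "public_facilities": 0.55, "urban_environment": 0.80}
print(round(feasibility_score(weights, scores), 3))
print(round(benefit_cost_ratio(benefits=[0, 12, 12, 12, 12], costs=[60, 2, 2, 2, 2]), 3))
```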

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support the rational decision-making of business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information are collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of the market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec. Also, the product group clustering method can be changed to other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that these can further improve the performance of the basic model conceptually proposed in this study.
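
A minimal sketch of the pipeline described above, using gensim's Word2Vec: tokenized product names are embedded (the abstract reports a vector dimension of 300 and a window size of 15), products similar to a KSIC index word are pulled by cosine similarity, and their sales are summed. The toy corpus, sales figures, index word and similarity threshold below are placeholders, not the Statistics Korea microdata.

```python
# Minimal sketch of the described pipeline using gensim's Word2Vec: embed
# product names, pull products similar to a KSIC index word by cosine
# similarity, and sum their sales. The corpus, sales figures, index word and
# similarity threshold are all placeholders, not the study's microdata.
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a tokenized product description
corpus = [["stainless", "steel", "pipe"], ["steel", "pipe", "fitting"],
          ["copper", "pipe"], ["industrial", "pump"], ["vacuum", "pump"]]
sales = {"pipe": 120.0, "fitting": 30.0, "pump": 80.0,
         "steel": 15.0, "copper": 10.0}              # sales per product token (placeholder)

# The abstract reports vector_size=300 and window=15 as the optimized parameters
model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, sg=1, epochs=200)

def market_size(index_word, threshold=0.1, topn=20):
    """Sum sales of products whose cosine similarity to index_word exceeds threshold."""
    similar = model.wv.most_similar(index_word, topn=topn)
    group = [index_word] + [w for w, sim in similar if sim >= threshold]
    return group, sum(sales.get(w, 0.0) for w in group)

print(market_size("pipe"))
```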