
Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. An early neural network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition in 2012 using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer-learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, directly using the high-dimensional features extracted from multiple ConvNet layers remains a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, each image from the target task is fed forward through the pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation that carries more information about the image; concatenating the three fully connected layer features yields a 9,192-dimensional representation (4096+4096+1000). However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for multiple ConvNet layer representations. Our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% accuracy of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% accuracy of the FC7 layer on the SUN397 dataset. We also show that our approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
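The three-step pipeline (multi-layer activation extraction, concatenation, PCA) can be illustrated with a minimal sketch. This is our own reconstruction assuming torchvision's pretrained AlexNet; the classifier layer indices, the number of PCA components, and the linear SVM are assumptions, not details fixed by the paper.

```python
# Minimal sketch of the three-step pipeline, assuming torchvision's AlexNet.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def extract_multilayer_features(images):
    """Steps 1-2: concatenate FC6, FC7, FC8 activations (4096+4096+1000 = 9192 dims)."""
    feats = []
    hook = lambda module, inp, out: feats.append(out.detach())
    # classifier[1], [4], [6] are the three fully connected layers of AlexNet
    handles = [alexnet.classifier[i].register_forward_hook(hook) for i in (1, 4, 6)]
    with torch.no_grad():
        alexnet(images)
    for h in handles:
        h.remove()
    return torch.cat(feats, dim=1).numpy()

# Step 3 (illustrative): PCA keeps the salient components before classification.
# `images` / `labels` stand for a preprocessed image batch and its targets.
# X = extract_multilayer_features(images)
# X_pca = PCA(n_components=512).fit_transform(X)   # 512 is an assumed choice
# clf = LinearSVC().fit(X_pca, labels)
```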

A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung; Kim, Kang-Hoe; Moon, Young-Su; Lee, Ho-Shin
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.67-88 / 2018
  • Until recently, because the significance of the sustainable growth and competitiveness of small and medium-sized enterprises (SMEs) has been recognized, governmental support has mainly been provided for tangible resources such as R&D, manpower, and funds. However, concerns have also been raised about the inefficiency of these support systems, such as underestimated or redundant support, because conflicting policies exist regarding the appropriateness, effectiveness, and efficiency of business support. From the perspective of the government or a company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources are the basis for creating competitive advantage, and we also emphasize the value creation activities this requires. This is why value chain network analysis is necessary: to analyze inter-company deal relationships across a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exist the Technology Opportunity Discovery (TOD) system, which provides information on the relevant products or technology status of companies with patents through retrieval by patent, product, or company name, and CRETOP and KISLINE, which both allow viewing of company (financial) information and credit information; however, no online system provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders that could become business partners in the future. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for corporate business strategy planning developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently. Further, we explore the network visualization function of the intelligent value chain analysis system, which provides the core information for understanding industrial structure and supporting a company's new product development. For a company to achieve competitive superiority over others, it must identify its competitors with patents or products currently in production, and searching for similar companies or competitors by industry type is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in identifying potential customers when both parties enter similar fields. Identifying a competitor at the enterprise or industry level by using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with the corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry.
In particular, it can be useful as a corporate-level information analysis tool for tasks such as identifying industry structure, tracking competitor trends, analyzing competitors, locating suppliers (sellers) and demanders (buyers), following industry trends by item, finding promising items, finding new entrants, finding core companies and items within a value chain, and recognizing patents and the corresponding companies. In addition, based on the objectivity and reliability of analysis results drawn from transaction deal information and financial data, the value chain network system is expected to be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid- or short-term demand forecasting, in particular by the more than 15,000 member companies in Korea and by employees of R&D service sectors, government-funded research institutes, and public organizations. To strengthen the business competitiveness of companies, technology, patent, and market information has so far been provided mainly by government agencies and private R&D service companies, typically in the form of patent analysis (mainly rating and quantitative analysis) or market analysis (market prediction and demand forecasting based on market reports). However, these services have been limited in resolving the lack of information, which is one of the difficulties that firms in Korea often face at the commercialization stage; information about competitors and potential candidates is especially difficult to obtain. In this study, a real-time value chain analysis and visualization service module based on the proposed network map and the data at hand is presented, together with expected market share, estimated sales volume, and contact information (which implies potential suppliers of raw materials and parts, and potential demanders of complete products and modules). In future research, we intend to investigate in depth the indices of competitive factors through the participation of research subjects, to newly develop competitive indices for competitors or substitute items, and to further improve the performance of VCNS with data mining techniques and algorithms.
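As a hedged illustration of the network-map idea described above (not KISTI's actual VCNS implementation), the sketch below builds a directed deal graph from supplier-to-buyer transactions and flags firms that sell to the same buyers as potential competitors; the firm names and amounts are invented.

```python
# Illustrative value-chain network: supplier -> buyer edges from deal records.
import networkx as nx

deals = [  # (supplier, buyer, amount) -- invented demo data
    ("FirmA", "FirmX", 120), ("FirmB", "FirmX", 80),
    ("FirmA", "FirmY", 40),  ("FirmC", "FirmY", 60),
]
G = nx.DiGraph()
for supplier, buyer, amount in deals:
    G.add_edge(supplier, buyer, weight=amount)

def competitors(firm):
    """Firms that share at least one buyer with `firm` (same position in the chain)."""
    buyers = set(G.successors(firm))
    return {s for b in buyers for s in G.predecessors(b)} - {firm}

print(competitors("FirmA"))  # {'FirmB', 'FirmC'}
```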

Activation of NF-${\kappa}B$ in Lung Cancer Cell Lines in Basal and TNF-${\alpha}$ Stimulated States (폐암 세포에서 기저 상태와 TNF-${\alpha}$ 자극 시 NF-${\kappa}B$의 활성화)

  • HwangBo, Bin; Lee, Seung-Hee; Lee, Choon-Taek; Yoo, Chul-Gyu; Han, Sung-Koo; Shim, Young-Soo; Kim, Young-Whan
    • Tuberculosis and Respiratory Diseases / v.52 no.5 / pp.485-496 / 2002
  • Background : The NF-κB transcription factors control various biological processes including the immune response, the acute phase reaction, and cell cycle regulation. NF-κB complexes are retained in the cytoplasm in the basal state, and various stimuli cause their translocation into the nucleus, where they bind to κB elements and regulate the transcription of target genes. Recent reports also suggest that NF-κB proteins are involved in oncogenesis, tumor growth, and metastasis. High expression of NF-κB has been reported in many cancer cell lines and tissues, and constitutive activation of NF-κB has been reported in several cancer cell lines, supporting its role in cancer development and survival. The anti-apoptotic action of NF-κB is important for cancer survival. NF-κB also controls the expression of several proteins important for cellular adhesion (ICAM-1, VCAM-1), suggesting a role in cancer metastasis. In lung cancer, high expression levels of the NF-κB subunits p50 and c-Rel have been reported. However, high expression does not necessarily mean high activity, and the activation pattern of NF-κB in lung cancer has not been reported. Materials and Methods : In this study, the nuclear binding activity of NF-κB in the basal and TNF-α stimulated states was examined in various lung cancer cell lines and compared with a normal bronchial epithelial cell line. Twelve lung cancer cell lines, including non-small cell and small cell lung cancer cell lines (A549, NCI-H358, NCI-H441, NCI-H552, NCI-H2009, NCI-H460, NCI-H1229, NCI-H1703, NCI-H157, NCI-H187, NCI-H417, NCI-H526), and the BEAS-2B bronchial epithelial cell line were used. To evaluate NF-κB expression and DNA binding activity, western blot analysis and an electrophoretic mobility shift assay were performed with nuclear protein extracts. Results : Basal expression of the p65 and p50 subunits was observed in the BEAS-2B cell line and in all lung cancer cell lines except NCI-H358 and NCI-H460. The expression levels of p65 and p50 increased 30 minutes after stimulation with TNF-α in BEAS-2B and in 10 lung cancer cell lines. In the NCI-H358 and NCI-H460 cell lines, p65 expression was not observed in either the basal or the stimulated state, and the levels of two p50-related proteins were higher after stimulation with TNF-α. These new proteins were smaller than p50 and are thought to be variants of p50. In the basal state, NF-κB was only weakly activated in BEAS-2B and in all lung cancer cell lines. The DNA binding activity of the NF-κB complexes was markedly higher after stimulation with TNF-α. In BEAS-2B and in all lung cancer cell lines except NCI-H358 and NCI-H460, the activated NF-κB complex was a p65/p50 heterodimer; in NCI-H358 and NCI-H460, it was a variant p50/p50 homodimer. Conclusion : The NF-κB activation pattern in the lung cancer cell lines and the normal bronchial epithelial cell line was similar, except for the activation of a variant p50/p50 homodimer in some lung cancer cell lines.

Comparison of CT based-CTV plan and CT based-ICRU38 plan in Brachytherapy Planning of Uterine Cervix Cancer (자궁경부암 강내조사 시 CT를 이용한 CTV에 근거한 치료계획과 ICRU 38에 근거한 치료계획의 비교)

  • Cho, Jung-Ken; Han, Tae-Jong
    • Journal of Radiation Protection and Research / v.32 no.3 / pp.105-110 / 2007
  • Purpose : In spite of recent remarkable improvements in diagnostic imaging modalities such as CT, MRI, and PET and in radiation therapy planning systems, the intracavitary radiotherapy (ICR) plan for uterine cervix cancer based on the recommendations of ICRU 38 (2D, film-based), such as Point A, is still widely used. A 3-dimensional ICR plan based on CT images provides dose-volume histogram (DVH) information for the tumor and normal tissues. In this study, we compared tumor dose, rectal dose, and bladder dose through a DVH analysis between a CTV plan and an ICRU 38 plan, both based on CT images. Method and Material : We analyzed 11 patients with cervix cancer who received Ir-192 HDR ICR. After 40 Gy of external beam radiation therapy, the ICR plan was established using the PLATO (Nucletron) v.14.2 planning system. CT scans were performed for all patients using a CT simulator (Ultra Z, Philips). We contoured the CTV, rectum, and bladder on the CT images and established a CTV plan, which delivers 100% of the dose to the CTV, and an ICRU plan, which delivers 100% of the dose to Point A. Result : The volumes (average ± SD) of the CTV, rectum, and bladder in the 11 patients were 21.8±6.6 cm³, 60.9±25.0 cm³, and 111.6±40.1 cm³, respectively. The volume covered by the 100% isodose curve was 126.7±18.9 cm³ in the ICRU plan and 98.2±74.5 cm³ in the CTV plan (p=0.0001). In ICRU planning, 22.0 cm³ of the CTV was not covered by the 100% isodose curve in the one patient whose residual tumor was larger than 4 cm, while 62.2±4.8 cm³ of normal tissue outside the tumor unnecessarily received more than 100% of the dose in the remaining 10 patients with residual tumors smaller than 4 cm. The bladder dose at the ICRU 38 reference point was 90.1±21.3% in the ICRU plan and 68.7±26.6% in the CTV plan (p=0.001), while the rectal dose at the ICRU 38 reference point was 86.4±18.3% and 76.9±15.6%, respectively (p=0.08). The maximum bladder and rectum doses were 137.2±50.1% and 101.1±41.8% in the ICRU plan and 107.6±47.9% and 86.9±30.8% in the CTV plan, respectively. Therefore, the radiation dose to normal organs was lower in the CTV plan than in the ICRU plan, but the normal tissue dose was remarkably higher than the recommended dose in the CTV plan for the one patient whose residual tumor was larger than 4 cm. The volume of rectum receiving more than the 80% isodose (V80rec) was 1.8±2.4 cm³ in the ICRU plan and 0.7±1.0 cm³ in the CTV plan (p=0.02). The volume of bladder receiving more than the 80% isodose (V80bla) was 12.2±8.9 cm³ in the ICRU plan and 3.5±4.1 cm³ in the CTV plan (p=0.005). By these parameters, the CTV plan also spared more normal tissue than the ICRU 38 plan. Conclusion : In the traditional ICRU plan, an unnecessary excess radiation dose is delivered to normal tissues within the 100% isodose area in cases of small cervix cancer, but with a CT-based CTV plan the normal tissue dose can be reduced remarkably without compromising the tumor dose. However, for large tumors, more research on effective 3D planning is needed to reduce the normal tissue dose.
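To make DVH parameters such as V80 concrete, here is a small illustrative sketch (not from the paper) of how the volume receiving at least a given dose can be computed from a dose grid and a structure mask; the voxel size, grid, and mask are invented for demonstration.

```python
# Computing a DVH-style metric (V80) from a 3-D dose grid and a structure mask.
import numpy as np

voxel_volume_cm3 = 0.2 ** 3              # assumed 2 mm isotropic voxels
dose = np.random.rand(64, 64, 64) * 120  # dose as percent of prescription (demo data)
rectum_mask = np.zeros_like(dose, dtype=bool)
rectum_mask[20:30, 20:30, 20:30] = True  # stand-in for a contoured structure

def v_dose(dose, mask, threshold_pct):
    """Absolute volume (cm^3) of `mask` receiving at least `threshold_pct` dose."""
    return np.count_nonzero(dose[mask] >= threshold_pct) * voxel_volume_cm3

print(f"V80(rectum) = {v_dose(dose, rectum_mask, 80.0):.1f} cm^3")
```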

The National Survey of Open Lung Biopsy and Thoracoscopic Lung Biopsy in Korea (개흉 및 흉강경항폐생검의 전국실태조사)

  • Scientific Committee, The Korean Academy of Tuberculosis and Respiratory Diseases (대한결핵 및 호흡기학회 학술위원회)
    • Tuberculosis and Respiratory Diseases / v.45 no.1 / pp.5-19 / 1998
  • Introduction: Direct histologic and bacteriologic examination of a representative specimen of lung tissue is the only certain method of providing an accurate diagnosis in various pulmonary diseases, including diffuse pulmonary diseases. The purpose of this national survey was to define the indications, incidence, effectiveness, safety, and complications of open and thoracoscopic lung biopsy in Korea. Methods: A multicenter registry of 37 university or general hospitals with more than 400 beds was retrospectively collected and analyzed for the 3 years from January 1994 to December 1996, using the same registry protocol. Results: 1) There were 511 cases from the 37 hospitals during the 3 years. The mean age was 50.2 years (±15.1 years), and men were more prevalent than women (54.9% vs 45.9%). 2) Open lung biopsy was performed in 313 cases (62%) and thoracoscopic lung biopsy in 192 cases (38%). Lung biopsy was more frequent in diffuse lung disease (305 cases, 59.7%) than in localized lung disease (206 cases, 40.3%). 3) The mean interval from the detection of abnormalities on chest X-ray to lung biopsy was 82.4 days (open lung biopsy: 72.8 days, thoracoscopic lung biopsy: 99.4 days). Before the open or thoracoscopic lung biopsy, bronchoscopy was performed in 272 cases (53.2%), bronchoalveolar lavage in 123 cases (24.1%), and percutaneous lung biopsy in 72 cases (14.1%). 4) There were 230 cases (45.0%) of interstitial lung disease, 133 cases (26.0%) of thoracic malignancy, 118 cases (23.1%) of infectious lung disease including tuberculosis, and 30 cases (5.9%) of other lung diseases including congenital anomalies. No significant differences in diagnostic rate or disease characteristics were noted between open and thoracoscopic lung biopsy. 5) The final diagnosis from the open or thoracoscopic lung biopsy was the same as the presumptive diagnosis before biopsy in 302 cases (59.2%). This concordance rate was 66.5% in interstitial lung disease, 58.7% in thoracic malignancy, 32.7% in lung infection, 55.1% in pulmonary tuberculosis, and 62.5% in other lung diseases including congenital anomalies. 6) One day after lung biopsy, PaCO2 had increased from a pre-biopsy level of 38.9±5.8 mmHg to 40.2±7.1 mmHg (P<0.05), and PaO2/FiO2 had decreased from a pre-biopsy level of 380.3±109.3 mmHg to 339.2±138.2 mmHg (P=0.01). 7) The complication rate after lung biopsy was 10.1%. The complication rate of open lung biopsy was much higher than that of thoracoscopic lung biopsy (12.4% vs 5.8%, P<0.05). Complications included pneumothorax (23 cases, 4.6%), hemothorax (7 cases, 1.4%), death (6 cases, 1.2%), and others (15 cases, 2.9%). 8) Five of the six deaths due to lung biopsy were associated with open lung biopsy; for one fatal case, the biopsy method was not described. The underlying diseases were thoracic malignancy in 3 cases (2 bronchoalveolar cell cancers and one malignant mesothelioma), metastatic lung cancer in 2 cases, and interstitial lung disease in one case. The interval between open lung biopsy and death was 15.5±9.9 days. 9) Despite lung biopsy, 19 cases (3.7%) could not be diagnosed, because the biopsy was taken from a site other than the target lesion (5 cases), the specimen was too small to interpret (3 cases), or a pathologic diagnosis could not be reached (11 cases).
10) The contribution of the open or thoracoscopic lung biopsy to the final diagnosis was definitely helpful in 334 cases (66.5%), moderately helpful in 140 cases (27.9%), and not helpful or impossible to judge in 28 cases (5.6%). Overall, open or thoracoscopic lung biopsy was helpful in diagnosing the lung lesion in 94.4% of cases. Conclusions: Open and thoracoscopic lung biopsy are relatively safe and reliable diagnostic methods for lung lesions that cannot be diagnosed by other approaches such as bronchoscopy. We recommend thoracoscopic lung biopsy for patients in critical condition, because it is safer than open lung biopsy and has an equal diagnostic yield.


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, the proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures in IT facilities occur irregularly because of interdependence, and it is difficult to identify their causes. Previous studies on predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and we focused on analyzing complex failures occurring within servers. Failures external to the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. On the other hand, the causes of failures occurring inside a server are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a server failure may cause failures in other servers or be triggered by failures in other servers. In other words, while existing studies analyzed failures on the assumption of single servers that do not affect one another, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on one device, any failure occurring on another device within 5 minutes was defined as occurring simultaneously. After constructing sequences for the devices that failed at the same time, five devices that frequently failed simultaneously within the constructed sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Because the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server setting, the Hierarchical Attention Network deep learning model structure was used, considering that the contribution to a complex failure differs across servers. This algorithm improves prediction accuracy by assigning greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
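The 5-minute co-occurrence rule described above can be sketched as follows; this is a hedged illustration with invented device names, column names, and timestamps, not the authors' code.

```python
# Group failure events into episodes: events within 5 minutes of the previous
# event belong to the same episode; multi-device episodes are "complex failures".
import pandas as pd

events = pd.DataFrame({
    "device": ["srv1", "srv2", "srv3", "srv1"],
    "failure": ["Server Down", "DBMS Service Down", "Network Node Down", "Server Down"],
    "time": pd.to_datetime(["2020-01-01 10:00", "2020-01-01 10:03",
                            "2020-01-01 10:04", "2020-01-01 11:00"]),
}).sort_values("time")

# Start a new episode whenever the gap to the previous event exceeds 5 minutes.
gap = events["time"].diff() > pd.Timedelta(minutes=5)
events["episode"] = gap.cumsum()

# Episodes touching more than one device are treated as complex failures.
complex_eps = events.groupby("episode")["device"].nunique()
print(events[events["episode"].isin(complex_eps[complex_eps > 1].index)])
```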
In the first experiment, the same collected data were modeled both as a single-server state and as a multiple-server state, and the results were compared and analyzed. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, the single-server model predicted that three of the five servers had no failure even though failures actually occurred, whereas the multiple-server model predicted failures for all five servers. These results support the hypothesis that there is an effect between servers. This study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server differs, improved the analysis. In addition, applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. We expect that failures can be prevented in advance by using the results of this study.
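As a minimal sketch of the model structure described in the abstract (per-server sequence encoders with an attention layer that weights servers by their estimated influence), the following is our own reconstruction under assumed dimensions and a single-layer LSTM, not the authors' implementation.

```python
# Per-server LSTM encoders + attention over servers -> episode failure logit.
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_metrics=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_metrics, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each server's summary vector
        self.head = nn.Linear(hidden, 1)   # failure prediction (logit)

    def forward(self, x):                  # x: (batch, servers, time, metrics)
        b, s, t, m = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, m))
        h = h[-1].reshape(b, s, -1)        # one summary vector per server
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over servers
        return self.head((w * h).sum(dim=1))     # weighted pooling -> prediction

model = ServerAttentionNet()
logit = model(torch.randn(4, 5, 60, 8))    # 4 samples, 5 servers, 60 time steps
print(logit.shape)                         # torch.Size([4, 1])
```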

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin using origin-destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting origin-destination trip length frequency (OD TLF) distributions by trip type are applied to the gravity model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: internal-internal (I-I), internal-external (I-E), and external-external (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
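For readers unfamiliar with the gravity model referenced above, the sketch below shows the standard trip-distribution formulation with an exponential friction-factor curve; the zone data and the decay parameter are invented, and the paper's calibrated curves are not reproduced here.

```python
# Gravity-model trip distribution: T_ij = P_i * A_j F(c_ij) / sum_k A_k F(c_ik).
import numpy as np

P = np.array([100.0, 50.0, 80.0])         # zonal trip productions (assumed)
A = np.array([60.0, 120.0, 50.0])         # zonal trip attractions (assumed)
cost = np.array([[5, 20, 35],             # zone-to-zone travel times
                 [20, 5, 15],
                 [35, 15, 5]], dtype=float)

def friction(c, beta=0.1):
    """Exponential friction-factor curve; beta is calibrated per trip type."""
    return np.exp(-beta * c)

F = friction(cost)
T = P[:, None] * (A * F) / (A * F).sum(axis=1, keepdims=True)
print(T.round(1))   # trip table; each row sums to its production P_i
```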
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, selected-link-based (SELINK) analyses are used to adjust the productions and attractions and, if needed, recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link-based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links yields the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions, but more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the state highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not reveal any specific pattern of over- or underestimation. However, the %RMSE for the ISH is the smallest, while that for the STH is the largest. This pattern is consistent with the screenline analysis and with the overall relationship between %RMSE and ground count volume groups.
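The SELINK adjustment described above reduces to a simple ratio update. The following sketch is a simplified illustration under invented numbers: it scales the trips of O-D pairs assigned to the selected link by the ground count to assigned volume ratio (the abstract applies the factor to the zonal productions and attractions of those trips; scaling the trip cells directly is our simplification).

```python
# Simplified SELINK-style adjustment: factor = ground count / assigned volume.
import numpy as np

T = np.array([[30.0, 10.0], [20.0, 40.0]])   # current GM trip table (invented)
uses_link = np.array([[True, False],          # O-D pairs assigned to the
                      [True, False]])         # selected link (from assignment)
ground_count, assigned = 60.0, 50.0

factor = ground_count / assigned              # 1.2: the link is under-assigned
T_adj = np.where(uses_link, T * factor, T)    # scale only trips using the link
print(T_adj)                                  # [[36. 10.] [24. 40.]]
```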
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of those factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied; ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan; Kwon, Taehoon; Jun, Seung-pyo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.1-33 / 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development to sustain growth and survival, but also seek networking to overcome resource limitations; yet they face constraints because of their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. Government-funded research institutes play an important role in solving the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to them to enhance their innovation capacity, and to show how this utilization contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of policy-based efforts, with the goal of helping establish strategies for building the information support infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that utilize the information support infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our results. Next, we performed mediator and moderator effect analyses on multiple variables to examine the process through which the use of the information support infrastructure improves external networking capability and, in turn, enhances product competitiveness. This analysis identifies the key factors to focus on when offering indirect support to SMEs through the information support infrastructure, which in turn helps manage research on SME support policies implemented by government-funded institutions more efficiently. The results of this study are as follows. First, SMEs that used the information support infrastructure differed significantly in size from domestic R&D SMEs, but no significant difference appeared in a cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information support infrastructure are larger and include a relatively higher share of companies that transact extensively with large companies, compared to the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions.
Second, among the SMEs that use the information support infrastructure, we found that increasing external networking capability contributed to enhancing product competitiveness; this was not a direct effect but an indirect contribution through increased open marketing capability: in other words, an indirect-only mediator effect. Also, the number of times a company received additional support through mentoring on information utilization had a mediated moderation effect on improving external networking capability and, in turn, strengthening product competitiveness. These results provide several insights for policy. The findings suggest that KISTI's information support infrastructure tends to support groups that are already positioned to achieve good performance; accordingly, the government should set clear priorities on whether to support underdeveloped companies or to reinforce better-performing ones. Through this research, we have identified how the public information infrastructure contributes to product competitiveness, from which some policy implications can be drawn. First, the public information support infrastructure should strengthen firms' ability to interact with, or to find, the experts who provide the required information. Second, if the utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring as a parallel offline support; rather, offline support such as mentoring should serve as a device for monitoring abnormal symptoms. Third, SMEs should improve their ability to utilize the infrastructure, because the effects of enhancing networking capability and, through it, product competitiveness appear across most types of companies rather than in specific SMEs.
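Since the abstract's findings rest on mediation analysis, the sketch below illustrates a generic product-of-coefficients mediation check (not the authors' exact model); the variable names and the simulated data are ours, and a bootstrap would normally be added to test the indirect effect.

```python
# Generic mediation sketch: infrastructure use (X) -> networking capability (M)
# -> product competitiveness (Y); indirect effect estimated as a*b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
use_infra = rng.normal(size=n)                                  # X
networking = 0.5 * use_infra + rng.normal(size=n)               # M (mediator)
competitiveness = 0.4 * networking + 0.1 * use_infra + rng.normal(size=n)  # Y

# Path a: X -> M.  Paths b and c': M and X -> Y in one regression.
a = sm.OLS(networking, sm.add_constant(use_infra)).fit().params[1]
fit_y = sm.OLS(competitiveness,
               sm.add_constant(np.column_stack([use_infra, networking]))).fit()
direct, b = fit_y.params[1], fit_y.params[2]
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")
```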