
Minimum Wage and Productivity: Analysis of Manufacturing Industry in Korea (최저임금과 생산성: 우리나라 제조업의 사례)

  • Kim, Kyoo Il;Ryuk, Seung Whan
    • Economic Analysis
    • /
    • v.26 no.1
    • /
    • pp.1-33
    • /
    • 2020
  • Recent discussions about a minimum wage increase (MWI) and its influence on the economy have mainly focused on quantitative aspects such as labor costs and employment. On the qualitative side, however, an MWI could have positive effects by enhancing firm productivity and crowding marginal firms out of the market. These positive effects can offset, to some extent, its potential negative effects, such as increased labor costs and decreased employment. In this regard, we empirically examine the impact of an MWI on firm productivity (total factor productivity). Using firm-level panel data from the manufacturing industry in Korea, we calculate the influence rates of the minimum wage by sector and by firm size (number of workers) and analyze their effects on firm productivity. In particular, the firms' production functions are estimated taking into account endogeneity among the input factors, in order to resolve a drawback of existing studies: underestimating the capital factor coefficient and overestimating the labor factor coefficient. This study finds that the influences of an MWI on wages, employment, and productivity differ substantially across sectors and firm sizes. While an MWI is shown to have a positive influence on productivity growth in the manufacturing industry as a whole, the direction and degree of the productivity change vary by sector. The impacts of an MWI on firm productivity are generally estimated to be more negative for smaller firms, but in some sectors the effects are found to be positive. In addition, the wage increases resulting from an MWI appear to enhance productivity across all sectors of the manufacturing industry. The policy implications are as follows. Given the empirical finding that an MWI raises productivity in many sectors of the manufacturing industry, future minimum wage policy should take into consideration not only the negative side effects but also the positive effects of an MWI. Moreover, despite a uniform minimum wage, the diverse influence rates of the minimum wage across firms have different impacts on wages, employment, and productivity across sectors and firm sizes. This finding could inform discussions about differentiating minimum wage schemes by sector or firm size.
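
The core of the empirical strategy is a log-linear Cobb-Douglas production function whose residual is interpreted as total factor productivity. Below is a minimal Python sketch of that decomposition on synthetic firm data; it uses plain OLS for brevity, whereas the paper corrects for endogeneity among inputs (for example with a proxy-variable estimator), precisely because OLS tends to overstate the labor coefficient and understate the capital coefficient.

```python
import numpy as np

# Minimal sketch: recover total factor productivity (TFP) as the residual of
# log Y = a + b*log K + c*log L. The firm panel below is synthetic and the
# OLS fit is illustrative only; the paper's estimator additionally controls
# for endogeneity among the input factors.
rng = np.random.default_rng(0)
n = 500
log_k = rng.normal(5.0, 1.0, n)                    # log capital
log_l = rng.normal(4.0, 0.8, n)                    # log labor
log_y = 0.5 + 0.35 * log_k + 0.6 * log_l + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), log_k, log_l])
coef, *_ = np.linalg.lstsq(X, log_y, rcond=None)   # OLS fit of the elasticities
log_tfp = log_y - X @ coef                         # TFP = production residual
print("capital elasticity:", round(coef[1], 3),
      "labor elasticity:", round(coef[2], 3))
```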

Export Prediction Using Separated Learning Method and Recommendation of Potential Export Countries (분리학습 모델을 이용한 수출액 예측 및 수출 유망국가 추천)

  • Jang, Yeongjin;Won, Jongkwan;Lee, Chaerok
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.69-88
    • /
    • 2022
  • One characteristic of South Korea's economic structure is its high dependence on exports, so many businesses are closely tied to the global economy and diplomatic situation. In addition, small and medium-sized enterprises (SMEs) specializing in exporting are struggling due to the spread of COVID-19. This study therefore aimed to develop a model that forecasts exports for the next year to support SMEs' export strategy and decision making, and proposed a strategy for recommending promising export countries for each item based on the forecasting model. We analyzed important variables used in previous studies, such as country-specific, item-specific, and macroeconomic variables, and collected them to train our prediction model. Exploratory data analysis (EDA) showed that exports, the target variable, have a highly skewed distribution. To deal with this issue and improve predictive performance, we suggest a separated learning method: the whole dataset is divided into homogeneous subgroups and a prediction algorithm is applied to each group, so that the characteristics of each group can be trained more precisely using different input variables and algorithms. We divided the dataset into five subgroups based on export value to decrease the skewness of the target variable. After the separation, we found that each group has different characteristics in countries and goods. For example, in Group 1 most of the export destinations are developing countries and the majority of exported goods are low-value products such as glass and prints, whereas South Korea's major export destinations such as China, the USA, and Vietnam fall into Groups 4 and 5, where most exported goods are high-value products. We then used LightGBM (LGBM) and the exponential moving average (EMA) for prediction: considering the characteristics of each group, models were built with LGBM for Groups 1 to 4 and EMA for Group 5. Comparing different model structures and algorithms, we found that the separated learning model had the best performance. After the model was built, we also provided the variable importance of each group using SHAP values to add explainability to our model. Based on the prediction model, we proposed a two-stage recommendation strategy for potential export countries. In the first phase, a BCG matrix is used to find Star and Question Mark markets that are expected to grow rapidly; in the second phase, scores are calculated for each country and recommendations are made according to ranking. Using this recommendation framework, potential export countries were selected and information about those countries was presented for each item. This study has several implications. First, whereas most preceding studies examined a specific situation or country, this study uses various variables and develops a machine learning model for a wide range of countries and items. Second, to our knowledge, it is the first attempt to adopt a separated learning method for export prediction; by separating the dataset into five homogeneous subgroups, we could enhance the predictive performance of the model, and more detailed explanations of the models are provided by group using SHAP values. Lastly, this study has several practical implications. Some platforms, including KOTRA's, provide trade information, but most of them are based on past data, so it is not easy for companies to predict future trends. By utilizing the model and recommendation strategy of this research, the trade-related services on each platform can be improved so that companies, including SMEs, can fully utilize them when making export strategies and decisions.
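
As a rough illustration of the separated learning idea, the sketch below splits a skewed target into five quantile groups, fits a LightGBM regressor to the first four, and uses an exponential moving average for the last; the synthetic data, feature count, group boundaries, and EMA smoothing factor are all hypothetical stand-ins, not the paper's actual configuration.

```python
import numpy as np
import lightgbm as lgb

# Separated learning sketch: divide the data into homogeneous subgroups by
# the (skewed) export value, then fit a separate model per group.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                  # country/item/macro features
y = np.exp(rng.normal(2.0, 1.5, 1000))          # highly skewed export values

edges = np.quantile(y, [0.2, 0.4, 0.6, 0.8])    # five groups by export size
group = np.digitize(y, edges)                   # labels 0..4 (Groups 1..5)

models = {}
for g in range(4):                              # LightGBM for Groups 1-4
    mask = group == g
    models[g] = lgb.LGBMRegressor(n_estimators=200).fit(X[mask], y[mask])

def ema_forecast(series, alpha=0.3):
    """Exponential moving average used for the largest-export group (Group 5)."""
    level = series[0]
    for v in series[1:]:
        level = alpha * v + (1 - alpha) * level
    return level
```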

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.207-221
    • /
    • 2023
  • This study suggests deep neural network models for estimating air temperature from the Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as cold and heat waves. Many studies estimate air temperature from the land surface temperature (LST) retrieved from satellites, because air temperature has a strong relationship with LST. However, the LST algorithm, a Level 2 output of GK-2A, works only for clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from the L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. In-situ air temperature data from 95 stations totaled 2,496,634 records, and the ratios of data paired with LST and with L1B were 42.1% and 98.4%, respectively. Data from 2020 and 2021 were used for training and data from 2022 for validation. The DNN model is designed with an input layer taking the 16 channels and four fully connected hidden layers. Using the 16 bands of L1B, the DNN achieved an RMSE of 2.22℃, outperforming the baseline model's RMSE of 3.55℃ under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, the model showed different characteristics in the seasonal and hourly analyses, and solar information needs to be appended as an input to build a more general DNN model, because the summer and winter seasons showed low coefficients of determination with high standard deviations.
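
A minimal sketch of the described setup, a fully connected regressor taking the 16 L1B channels as input with four hidden layers, is shown below using scikit-learn; the layer widths, synthetic training data, and hyperparameters are hypothetical, not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# DNN sketch: 16 L1B channel values in, 1.5 m air temperature out,
# evaluated by RMSE as in the study. All data here are synthetic.
rng = np.random.default_rng(2)
X_train = rng.normal(size=(5000, 16))           # 16 L1B channels per sample
y_train = rng.normal(15.0, 10.0, 5000)          # in-situ air temperature (C)
X_val = rng.normal(size=(1000, 16))
y_val = rng.normal(15.0, 10.0, 1000)

dnn = MLPRegressor(hidden_layer_sizes=(64, 64, 32, 16),  # four hidden layers
                   max_iter=500, random_state=0).fit(X_train, y_train)
rmse = mean_squared_error(y_val, dnn.predict(X_val)) ** 0.5
print(f"validation RMSE: {rmse:.2f} C")
```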

Estimation of Chlorophyll-a Concentration in Nakdong River Using Machine Learning-Based Satellite Data and Water Quality, Hydrological, and Meteorological Factors (머신러닝 기반 위성영상과 수질·수문·기상 인자를 활용한 낙동강의 Chlorophyll-a 농도 추정)

  • Soryeon Park;Sanghun Son;Jaegu Bae;Doi Lee;Dongju Seo;Jinsoo Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.655-667
    • /
    • 2023
  • Algal bloom outbreaks are frequently reported around the world, and serious water pollution problems arise every year in Korea, so it is necessary to protect the aquatic ecosystem through continuous management and rapid response. Many studies use satellite images to estimate the concentration of chlorophyll-a (Chl-a), an indicator of algal bloom occurrence. However, because it is difficult to calculate Chl-a accurately due to spectral characteristics and atmospheric correction errors that vary with the water system, machine learning models have recently been adopted, and the factors affecting algal blooms need to be considered alongside the satellite spectral indices. Therefore, this study constructed a dataset combining water quality, hydrological, and meteorological factors with Sentinel-2 images. Two representative ensemble models, random forest and extreme gradient boosting (XGBoost), were used to predict Chl-a concentrations at eight weirs located on the Nakdong River over the past five years. The R-squared score (R2), root mean square error (RMSE), and mean absolute error (MAE) were used as model evaluation indicators, and XGBoost achieved an R2 of 0.80, an RMSE of 6.612, and an MAE of 4.457. Shapley additive explanations (SHAP) analysis showed that the water quality factors (suspended solids, biochemical oxygen demand, and dissolved oxygen) and the band ratio using red-edge bands were of high importance in both models. The varied input data were confirmed to help improve model performance, and the approach appears applicable to domestic and international algal bloom detection.
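
The sketch below mirrors the paper's comparison, training random forest and XGBoost regressors on a combined feature set and reporting R2, RMSE, and MAE; the synthetic data and feature count are hypothetical placeholders for the water quality, hydrological, meteorological, and Sentinel-2 predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from xgboost import XGBRegressor

# Two ensemble regressors evaluated with the study's three metrics.
rng = np.random.default_rng(3)
X = rng.normal(size=(800, 12))                  # combined predictor set
y = rng.gamma(2.0, 5.0, 800)                    # Chl-a concentration (mg/m3)
split = 600
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              XGBRegressor(n_estimators=300, learning_rate=0.05)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          f"R2={r2_score(y_te, pred):.2f}",
          f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}",
          f"MAE={mean_absolute_error(y_te, pred):.3f}")
```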

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the residual chlorine concentration downstream of the sedimentation basins in the water treatment process. An AI-based model that learns from past water quality observations to predict future water quality offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at the studied plant using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of the residual chlorine concentration, the input-output structure of the AI model used the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, and similar measures as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from data observable at the water treatment plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that the Random Forest model had the lowest error compared with the multiple regression models, neural network models, model trees, and other Random Forest variants. The optimal prediction of the residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in the preceding treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
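
A minimal sketch of the described input-output structure follows: the listed observables as independent variables and the downstream residual chlorine as the target, fitted with a random forest. The column names and synthetic data are hypothetical stand-ins for the plant's observation records.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Predictors observable at the plant; names are hypothetical placeholders.
features = ["upstream_residual_cl", "turbidity", "pH", "water_temp",
            "conductivity", "raw_water_inflow", "alkalinity", "NH3"]

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(2000, len(features))), columns=features)
# Synthetic target: downstream residual chlorine driven mostly by the
# upstream concentration, with a small turbidity effect and noise.
df["downstream_residual_cl"] = (0.8 * df["upstream_residual_cl"]
                                - 0.1 * df["turbidity"]
                                + rng.normal(0, 0.05, len(df)))

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(df[features], df["downstream_residual_cl"])

# The prediction can feed back into real-time chlorine dosing control
# at the preceding treatment stage.
print("predicted residual Cl:", model.predict(df[features].tail(1))[0])
```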

The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.43-48
    • /
    • 2014
  • Purpose: In PET/CT images, the standardized uptake value (SUV) enables quantitative assessment of biological changes in organs and serves as an index for distinguishing whether a lesion is malignant. It is therefore important to correctly enter the parameters that affect the SUV. The purpose of this study is to establish an allowable error range for the SUV by measuring how the results differ with input errors in the Activity, Weight, and uptake Time parameters. Materials and Methods: Three inserts (Hot, Teflon, and Air) were placed in the 1994 NEMA phantom, which was filled with 27.3 MBq/mL of 18F-FDG. The ratio of hotspot-area to background-area activity was set to 4:1. After scanning, images were re-reconstructed with input errors of ±5%, 10%, 15%, 30%, and 50% from the original data applied to the Activity, Weight, and uptake Time parameters. Regions of interest (ROIs) were set, one in each insert area and four in the background areas. SUVmean and percentage differences were calculated and compared for each area. Results: The SUVmean of the Hot, Teflon, Air, and background (BKG) areas in the original images were 4.5, 0.02, 0.1, and 1.0, respectively. With Activity errors, the minimum and maximum SUVmean were 3.0 and 9.0 in the Hot area, 0.01 and 0.04 in Teflon, 0.1 and 0.3 in Air, and 0.6 and 2.0 in BKG, with percentage differences ranging uniformly from -33% to 100%. Weight errors yielded SUVmean of 2.2 and 6.7 in Hot, 0.01 and 0.03 in Teflon, 0.09 and 0.28 in Air, and 0.5 and 1.5 in BKG, with percentage differences ranging uniformly from -50% to 50%, except in the Teflon area, where they ranged from -50% to 52%. Uptake Time errors yielded SUVmean of 3.8 and 5.3 in Hot, 0.01 and 0.02 in Teflon, 0.1 and 0.2 in Air, and 0.8 and 1.2 in BKG; the percentage differences ranged from 17% to -14% in the Hot and BKG areas, from -50% to 52% in the Teflon area, and from -12% to 20% in the Air area. Conclusion: If the allowable SUV error range is set within 5%, Activity and Weight entry errors must be kept within ±5%. The dose calibrator and the weighing scale should therefore be calibrated to within a ±5% error range, because they directly affect the Activity and Weight values. Time errors showed different error ranges depending on the insert type: the Hot and BKG areas stayed within a 5% SUV error when the Time error was within ±15%. Accordingly, each clock's time error must be considered when two or more clocks, including the scanner's, are used during examinations.
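
The sensitivity of the SUV to these parameters follows directly from its definition, SUV = tissue activity concentration / (decay-corrected injected dose / body weight). The sketch below, with hypothetical numbers, shows how an entry error propagates: a weight error scales the SUV proportionally, while activity and uptake-time errors act through the denominator.

```python
import math

def suv_mean(pixel_activity_bq_ml, injected_dose_bq, weight_g,
             uptake_time_min, half_life_min=109.8):
    """SUV with a decay-corrected injected dose.

    The 18F half-life of 109.8 min is a physical constant; the other
    inputs are the parameters whose entry errors the study varies.
    """
    decayed_dose = injected_dose_bq * math.exp(
        -math.log(2) * uptake_time_min / half_life_min)
    return pixel_activity_bq_ml / (decayed_dose / weight_g)

# Hypothetical illustration: a +5% weight entry error scales the SUV by
# the same +5%, consistent with the symmetric Weight-error ranges above.
base = suv_mean(27_300, 150e6, 60_000, 60)
off = suv_mean(27_300, 150e6, 60_000 * 1.05, 60)
print(f"SUV change from +5% weight error: {100 * (off / base - 1):.1f}%")
```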


The Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie content has become easier with the advent of smart TVs, IPTV, and web services that can be used to search for and watch movies, and searches for content matching user preferences are increasing. However, since the amount of available movie content is so large, users need considerable effort and time to find it. Hence, there is much research on personalized item recommendation through analysis and clustering of user preferences and profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only the relations between movie metadata but also the relations between the metadata and the user profile, and the relations between metadata indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we selected the main metadata, genre, actor/actress, keywords, and synopsis, which affect users' choice of movies. The user model contains the user's demographic information and the relations between the user and the movie metadata. Our movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For the knowledge base, we entered the individual data of 14,374 movies for each concept in the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata a user finds interesting, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches candidate movies based on the demographic information of the user: we assign users to groups according to demographic information, define the rules that decide each user's group, and generate the query used to search for that group's candidate movies. The second component searches candidate movies based on user preference. When choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords; users enter their preferences, and the system searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of a recommended candidate movie has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components: we calculate the weight of each movie from the weight values of its metadata and sort the movies by that weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata type, and applies the contribution weights to the metadata; the result of this step is used as the recommendation for users. We tested the usability of the proposed scheme with a web application implemented with JSP, JavaScript, and the Protégé API.
In our experiment, we collected the results of 20 men and women aged 20 to 29, using 7,418 movies with ratings of at least 7.0. We provided Top-5, Top-10, and Top-20 recommended movies to the users, who then chose the movies that interested them. On average, users chose 2.1 interesting movies in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20, which is better than the results yielded by each metadata type alone.
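
As a rough illustration of the third and fourth components, the sketch below ranks candidate movies by a contribution-weighted sum of per-metadata match weights; the metadata names, contribution weights, and candidate scores are all hypothetical.

```python
# Contribution weights per metadata type (hypothetical values).
METADATA_CONTRIBUTION = {"genre": 0.4, "actor": 0.25,
                         "keywords": 0.2, "synopsis": 0.15}

def movie_score(metadata_weights: dict) -> float:
    """Combine per-metadata match weights into one ranking score."""
    return sum(METADATA_CONTRIBUTION.get(k, 0.0) * w
               for k, w in metadata_weights.items())

# Per-metadata match weights for each candidate movie (hypothetical).
candidates = {
    "Movie A": {"genre": 0.9, "actor": 0.2, "keywords": 0.7, "synopsis": 0.4},
    "Movie B": {"genre": 0.5, "actor": 0.8, "keywords": 0.3, "synopsis": 0.6},
}
ranked = sorted(candidates, key=lambda m: movie_score(candidates[m]),
                reverse=True)
print(ranked)  # recommendation order, e.g. for Top-5 / Top-10 / Top-20 lists
```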

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is an automated service that produces optimal asset allocation portfolios for investors using financial engineering algorithms, without human intervention. Since the first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms present asset allocation outputs to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the classic asset allocation model: a simple but intuitive portfolio strategy in which assets are allocated to minimize portfolio risk while maximizing the expected portfolio return. Despite its theoretical grounding, both academics and practitioners find that the standard mean-variance optimized portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium: the implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which the well-known Markowitz mean-variance optimization algorithm turns into an optimal portfolio. If the investor has no views on the asset classes, the Black-Litterman optimization model produces the market portfolio itself. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with the implied equilibrium returns may produce very poor portfolio output for Black-Litterman users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018.
Our suggested intelligent views model combined with the implied equilibrium returns produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the three-year Black-Litterman portfolio is 6.4%, the highest value; the maximum drawdown is -20.8%, the lowest value; and the Sharpe ratio, which measures the return to risk ratio, shows the highest value at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
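
The combination step described above is the standard Black-Litterman posterior. The sketch below implements it in numpy, with a hypothetical P/Q pair standing in for the SVM-generated views and made-up equilibrium returns and covariances.

```python
import numpy as np

def black_litterman(pi, sigma, P, Q, omega, tau=0.05):
    """Posterior expected returns:
    E[R] = [(tau*S)^-1 + P' W^-1 P]^-1 [(tau*S)^-1 pi + P' W^-1 Q],
    where pi are implied equilibrium returns, S the asset covariance,
    P/Q the view matrix and view returns, and W the view uncertainty.
    """
    ts_inv = np.linalg.inv(tau * sigma)
    w_inv = np.linalg.inv(omega)
    A = ts_inv + P.T @ w_inv @ P
    b = ts_inv @ pi + P.T @ w_inv @ Q
    return np.linalg.solve(A, b)

pi = np.array([0.04, 0.05, 0.03])              # implied equilibrium returns
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.05, 0.01],
                  [0.00, 0.01, 0.03]])         # asset covariance matrix
P = np.array([[1.0, -1.0, 0.0]])               # view: asset 1 beats asset 2
Q = np.array([0.02])                           # by 2%
omega = np.array([[0.001]])                    # confidence in that view
print(black_litterman(pi, sigma, P, Q, omega))
```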

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created, and the GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is whether multiple friction factor curves, rather than a single curve for each trip type, are needed to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database; given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves per trip type was not warranted. In traditional urban transportation planning studies, zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but those data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation, and Selected Link based (SELINK) analyses are then used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume, and this factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected link based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions; more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points evaluates the adequacy of the overall model: the total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run is 22% using 32 selected links and 31% using 16; these results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation; however, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest, a pattern consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas: 92%, 49%, 27%, and 35% for the North, West, East, and South respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions divided by total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments causing increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors; overall, both small and large values are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. Additional variables, including zonal employment data (office and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process: the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of those factors in the 3.0 range. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments overall appear reasonable. The heavy truck VMT analysis compares the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data; the forecast is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E; in contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted with the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
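
Two of the building blocks described above are easy to sketch: a doubly constrained gravity model that balances a trip table against zonal productions and attractions, and the SELINK link adjustment factor, the ratio of a link's ground count to its assigned volume. The zone counts, friction factors, and volumes below are hypothetical.

```python
import numpy as np

def gravity_model(P, A, F, iters=50):
    """Balance T_ij = P_i * A_j * F_ij so that row sums match the zonal
    productions and column sums match the attractions (iterative
    proportional fitting)."""
    T = np.outer(P, A) * F
    for _ in range(iters):
        T *= (P / T.sum(axis=1))[:, None]       # enforce productions
        T *= (A / T.sum(axis=0))[None, :]       # enforce attractions
    return T

P = np.array([100.0, 250.0, 80.0])              # zonal truck trip productions
A = np.array([120.0, 200.0, 110.0])             # zonal attractions
# Friction factors decreasing with zone-to-zone separation (hypothetical).
F = 1.0 / (1.0 + np.array([[1, 4, 6], [4, 1, 3], [6, 3, 1.0]]))

T = gravity_model(P, A, F)

# SELINK adjustment factor for one "selected link": the factor scales the
# productions and attractions of every O-D zone whose trips use that link.
ground_count, assigned_volume = 950.0, 1020.0
adjustment = ground_count / assigned_volume
print(T.round(1), adjustment)
```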


Sensory Information Processing

  • Yoshimoto, Chiyoshi
    • Journal of Biomedical Engineering Research
    • /
    • v.6 no.2
    • /
    • pp.1-8
    • /
    • 1985
  • The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft; in flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in graft failure and in the formation of anastomotic neointimal fibrous hyperplasia (ANFH). The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF 15-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70±1.32 mmHg/min) compared to CF dialyzers (4.32±0.55 mmHg/min) (p<0.05). However, there was no observable difference in the UFR between the two dialyzers. Neither APD nor UFR showed any significant increase with an increasing number of reuses, for up to more than 20 reuses. A substantial number of APD failures (larger than 20 mmHg/min) observed on the reused dialyzers (2 out of 40 CF and 5 out of 26 C-DAK) were attributed to possible damage to the fibers. The CF 15-11 dialyzers that failed the APD test did not show changes in UFR compared to normal dialyzers, indicating that APD is a more sensitive test than UFR for evaluating the integrity of the fibers.

For quantitative measurement of the light reflected from a clinical diagnostic strip, a prototype reflectance photometer was designed. The strip loader and cassette were made to obtain more accurate reflectance parameters. The strip was illuminated at 45° through an optical fiber, and the intensity of the reflected light was measured at a right angle using a photodiode. The Kubelka-Munk coefficient and reflection optical density were determined at four different wavelengths (500, 550, 570, and 610 nm) for a blood glucose strip. For glucose concentrations higher than 300 mg/dl, saturation of the absorbance was observed at 500, 550, and 570 nm. The correlation between glucose concentration and the parameters was best at 610 nm.

Radiation-induced fibrosarcoma tumors were grown on the flanks of C3H mice. The mice were divided into two groups: one group was injected intravenously with Photofrin II (2.5 mg/kg body weight) and the other received no Photofrin II. Mice from both groups were irradiated for approximately 15 minutes at 100, 300, or 500 mW/cm2 with argon (488 nm/514.5 nm), dye (628 nm), and gold vapor (pulsed 628 nm) laser light. The photosensitizer behaved as an added absorber. Under our experimental conditions, the presence of Photofrin II increased the surface temperature by at least 40%, and the temperature rise due to 300 mW/cm2 irradiation exceeded hyperthermia values. Light and temperature distributions with depth were estimated by a computer model, which demonstrated the influence of wavelength on the thermal process and proved to be a valuable tool for investigating internal temperature rise.

We investigated the structural geometry of thirty-eight Korean femurs. The purpose of this study was to identify major geometrical differences between Korean femurs and others that we believe belong to Caucasians, in order to gain insight into femoral component designs that fit Asians, including Koreans. We utilized computed tomography (CT) images of femurs extracted from cadavers. The CT images were transformed into bitmap data using a film scanner and then analyzed using commercially available software called Image v.1.0 on a Macintosh IIci computer. The resulting data were compared with already published data. The major results show that the geometry of the Korean femurs is significantly different from that of Caucasians: (1) the anteversion angle and the canal flare index are greater by approximately 8° and 0.5, respectively; (2) the shape of the isthmus cross section is more round; and (3) the distance between the lesser trochanter and the proximal border of the isthmus is shorter by about 15 mm. The results suggest that femoral components suitable for Asians should differ from the currently used components designed and manufactured mostly by European or American companies.

It is well known that the nonlinear propagation characteristics of waves in tissue may give very useful information for medical diagnosis. In this paper, a new method is proposed for detecting the nonlinear propagation characteristics of internal vibration in tissue under low frequency mechanical vibration, using bispectral analysis. In the method, a low frequency vibration of f0 (= 100 Hz) is applied to the surface of the object, and the waveform of the internal vibration x(t) is measured from the Doppler frequency modulation of simultaneously transmitted probing ultrasonic waves. The bispectra of the signal x(t) at the frequencies (f0, f0) and (f0, 2f0) are then calculated to estimate the nonlinear propagation characteristics as their magnitude ratio; since the bispectrum is free from Gaussian additive noise, the value can be obtained with a high S/N. A basic experimental system was constructed using 3.0 MHz probing ultrasonic waves, and several experiments were carried out on phantoms. The results show the superiority of the proposed method over the conventional method using the power spectrum, and also its usefulness for tissue characterization.

This paper describes the implementation of computerized radial pulse diagnosis with the aid of a clinical expert. On this basis, we constructed a radial pulse diagnosis system for Korean traditional medicine, composed of a radial pulse wave detection system and a radial pulse diagnosis system. With the detection system, we detected and processed the Inyoung and Cheongu radial pulse waves, obtained the characteristic parameters of the radial pulse wave, and quantified them according to the Inyoung-Cheongu comparison method of radial pulse diagnosis. We defined the judgment standard of the radial pulse diagnosis system and confirmed the possibility of realizing automatic radial pulse diagnosis in Korean traditional medicine.

Microspheres are expected to be applied to biomedical areas such as solid-phase immunoassays, drug delivery systems, and immunomagnetic cell separation. To synthesize microspheres for biomedical application, a "two stage shot growth method" was developed. The uniformity ratio of the synthesized microspheres was always smaller than 1.05, and their surface charge density (the number of ionizable functional groups) was 6 to 13 times higher than that of microspheres synthesized by conventional seeded batch copolymerization. As a preliminary step toward biomedical application, adsorption experiments of bovine albumin on the microspheres were carried out under various conditions. The maximum adsorbed amount was obtained in the neighborhood of pH 4.5; since the isoelectric point of bovine albumin is pH 5.0, this result shows a shift toward the acidic region. The adsorption isotherm was obtained, with the plateau region always reached at a bulk bovine albumin concentration of 2.0 g/L. The effect of the kind and amount of surface functional groups was also examined.

A medical image workstation was developed using multimedia techniques. The system, based on a PC-486DX, was designed to acquire medical images produced by medical imaging instruments along with related audio information, that is, doctors' reported results. Input information was processed and analyzed, and the results were presented in the form of graphs and animation. All the information in the system was hierarchically related, with the image at the apex. Processing and analysis algorithms were implemented so that diagnostic accuracy could be improved. The diagnostic information can be transferred for patient diagnosis through a LAN (local area network).

In conventional infrared imaging systems, complex infrared lens systems are usually used to direct collimated narrow infrared beams into a high speed two-dimensional optic scanner. In this paper, a simple reflective infrared optic system with a two-dimensional optic scanner is proposed for the realization of a medical infrared thermography system. It has been experimentally proven that an infrared thermography system composed of the proposed optic system has a temperature resolution of 0.1°C at a spatial resolution of 1 mrad, an image matrix size of 256 × 240, and an imaging time of 4 seconds.

In this paper, MIIS (Medical Image Information System) was designed and implemented using the INGRES RDBMS, based on a client/server architecture. The implemented system allows users to register and retrieve patient information, medical images, and diagnostic reports. It also displays this information on workstation windows simultaneously through the designed menu-driven graphical user interface. Medical image compression/decompression techniques were implemented and integrated into the medical image database system for efficient data storage and fast access through the network.

In this paper, computerized BEAM (brain electrical activity mapping) was implemented for the space domain analysis of EEG. The transformation from temporal summation to two-dimensional mappings is formed by a four-nearest-point interpolation method. There are two methods of representing BEAM: a dot density method, which classifies brain electrical potentials into 9 levels by the dot density of gray levels, and a colour method, which classifies them into 12 levels by red-green colours. With this BEAM, instantaneous changes and the average energy distribution over any arbitrary time interval of brain electrical activity could be observed and analyzed easily. In the frequency domain, the distribution of the energy spectrum of a particular band can easily distinguish normality from abnormality.

A laboratory information system (LIS) is a key tool for managing laboratory data in clinical pathology. Our department developed an information system for routine hematology using a down-sized computer system: an IBM 486-compatible PC with 16 MB main memory, a 210 MB hard disk drive, 9 RS-232C ports, and a 24-pin dot printer. The operating system and database management system were SCO UNIX and SCO foxbase, respectively. For program development, we used the Xbase language provided by SCO foxbase; the C language was used for interfacing. To make the system user friendly, a pull-down menu was used. The system is connected to our hospital information system via an application program interface (API), so information related to the patient and request details is automatically transmitted to our computer. Our system is interfaced with two complete blood count analyzers (Sysmex NE-8000 and Coulter STKS) for unidirectional data transmission from analyzer to computer. The authors suggest that this system based on a down-sized computer provides a progressive approach to a total LIS based on a local area network, and that the implemented system could serve as a model for other hospitals' routine hematology LIS.

To develop an artificial bone substitute that is gradually degraded and replaced by regenerated natural bone, the authors designed a composite consisting of calcium phosphate and collagen. For use as the structural matrix of the composite, collagen was purified from human umbilical cord. The obtained collagen was treated with pepsin to remove the telopeptides, finally producing immune-free atelocollagen. The cross-linked atelocollagen was highly resistant to collagenase-induced collagenolysis and demonstrated improved tensile strength.

This paper is a study on the design of adaptive filters for QRS complex detection. We propose a simple adaptive algorithm to increase the noise cancellation capability in QRS complex detection with a two-stage adaptive filter. At the first stage, background noise is removed, and at the next stage, only the spectrum of the QRS complex components is passed. The two adaptive filters can keep track of changes in both the noise and the QRS complex. Each adaptive filter consists of a prediction error filter and an FIR filter; the impulse response of the FIR filter uses the coefficients of the prediction error filter. The detection rates for records 105 and 108 of the MIT/BIH database were 99.3% and 97.4%, respectively.

To develop an artificial bone substitute that is gradually degraded and replaced by regenerated natural bone, the authors designed and produced a composite consisting of calcium phosphate and collagen. Pepsin-treated type I atelocollagen of human umbilical cord origin was used as the structural matrix, in which sintered or non-sintered carbonate apatite was encapsulated to form an inorganic-organic composite. With cross-linking of the atelocollagen by UV irradiation, the resistance to both compressive and tensile stress was increased, and collagen degradation by collagenase-induced collagenolysis was decreased.

We have developed a monoleaflet polymer valve as an inexpensive and viable alternative, especially for short-term use in a ventricular assist device or total artificial heart. The frame and leaflet of the polymer valve were made from polyurethane. To evaluate the hemodynamic performance of the polymer valve, a comparative in vitro study of flow dynamics past the polymer valve and a St. Jude Medical prosthetic valve was made under physiological pulsatile flow conditions. The valves were compared on transvalvular pressure drop, regurgitation volume, and maximum valve opening area. The polymer valve showed a smaller regurgitation volume and transvalvular pressure drop than the mechanical valve at higher heart rates, indicating that its functional characteristics compare favorably with those of the mechanical valve at higher heart rates.

The explosive evaporative removal of biological tissue by absorption of a CW laser was simulated using gelatin and a multimode Nd:YAG laser. Because the point of maximum temperature in laser-irradiated gelatin lies below the surface due to surface cooling, evaporation at the boiling temperature occurs explosively from below the surface. The important parameters of this process are the ratio of conduction loss to laser power absorption (defined as the conduction-to-laser power parameter, Nk), the ratio of convection heat transfer at the surface to conduction loss (defined as Bi), the dimensionless extinction coefficient (defined as Br), and the dimensionless irradiation time (defined as Fo). The dependence of Fo on Nk and Bi was observed experimentally, and the results were compared with numerical results obtained by solving a two-dimensional conduction equation. Fo and the explosion depth (from the surface to the point of maximum temperature) increase as Nk and Bi increase. To find the minimum laser power for the explosive evaporative removal process, a steady state analysis was also made, and the limit of Nk needed to induce evaporative removal, which is proportional to the inverse of the laser power, was obtained.

N1 and N2 gross neural action potentials were measured from the round window of the guinea pig cochlea at the onset of acoustic stimuli. N1-N2 audiograms were made by regulating stimulus intensities to produce constant N1-N2 potentials as criteria for different input tone pip frequencies. The lowest threshold was measured with an input tone pip of 15 dB SPL in intensity and 12 kHz in frequency when the animal was in normal physiological condition. The procedure of the experimental measurements is explained in detail. This experimental approach is very useful for the investigation of cochlear function: both the nonlinear and the active functions of the cochlea can be monitored by N1-N2 audiograms.

In electrical impedance tomography (EIT), boundary current and voltage measurements are used to provide information about the cross-sectional distribution of electrical impedance or resistivity. One of the major problems in EIT has been the inaccessibility of internal voltage or current data for finding the internal impedance values. We propose a new image reconstruction method using internal current density data measured by NMR. We obtained a two-dimensional current density distribution within a phantom by processing the real and imaginary MR images from a 4.7 T NMR machine. We implemented a resistivity image reconstruction algorithm using the finite element method and a sensitivity matrix, and we present computer simulation results of the image reconstruction algorithm and future directions of the research.

A new digital image analysis technique for the discrimination of cancer cells is presented in this paper. The object images were thyroid gland cell images diagnosed as normal and abnormal (two types of abnormal: follicular neoplastic cells and papillary neoplastic cells). Using the proposed region segmentation algorithm, the cells were segmented into nuclei. Sixteen feature parameters were used to calculate the features of each nucleus. As a consequence of using the dominant feature parameter method proposed in this paper, a discrimination rate of 91.11% was obtained for thyroid gland cells.

An electrical stimulator was designed to induce locomotion in paraplegic patients with central nervous system injury. Optimal stimulus parameters, which can minimize muscle fatigue and achieve effective muscle contraction, were determined in slow and fast muscles in Sprague-Dawley rats. The stimulus patterns of our stimulator were designed to simulate the electromyographic activity monitored during locomotion in normal subjects. The muscle types of the lower extremity were classified according to their mechanical contraction properties into slow muscle (soleus m.) and fast muscle (medial gastrocnemius m., rectus femoris m., vastus lateralis m.). The optimal electrical stimulation parameters were a 20 Hz, 0.2 ms square pulse for slow muscles and a 40 Hz, 0.3 ms square pulse for fast muscles to produce repeated contraction. Higher stimulus intensity was required when synergistic muscles were stimulated simultaneously than when they were stimulated individually. Electrical stimulation of each muscle was designed to generate bipedal locomotion, so that individual muscles alternate contraction and relaxation to simulate the stance and swing phases. A portable 16-channel electrical stimulator built around a microprocessor was constructed and applied to paraplegic patients with lumbar cord injury, and it partially restored gait function in these patients.

Two-dimensional modelling of cochlear biomechanics is presented in this paper. The Laplace partial differential equation representing the fluid mechanics of the cochlea has been transformed into a two-dimensional electrical transmission line, and the procedure of this transformation is explained in detail. A comparison between the one- and two-dimensional models is also presented. This electrical modelling of the basilar membrane (BM) is clearly useful for the next step, the development of the active elements that are essential in producing the sharp tuning of the BM. This paper shows that the two-dimensional model is qualitatively better than the one-dimensional model in both the amplitude and phase responses of the BM displacement. The present model covers only the frequency response; however, because the model is electrical, the two-dimensional transmission line model can be extended to the time response without difficulty.

A method is proposed for the fully automatic detection of the left ventricular endocardial boundary in 2D short-axis echocardiograms using a geometric model. The procedure has three distinct stages. First, the initial center is estimated by an initial center estimation algorithm applied to a decimated image. Second, the center estimation algorithm is applied to the original image and a best-fit elliptic model is estimated. Third, the best-fit boundary is detected by a cost function based on the best-fit elliptic model. The proposed method shows effective results without manual intervention by a human operator.

An intelligent trajectory control method that controls the moving direction and average velocity of a prosthetic arm is proposed, based on pattern recognition and force estimation from EMG signals. We also propose a real-time trajectory planning method that generates continuous acceleration paths using three-stage linear filters, to minimize the impact on the human body induced by arm motions and to reduce muscle fatigue. We use a combination of an MLP and a fuzzy filter for pattern recognition to estimate the direction of a muscle, and Hogan's method for force estimation. EMG signals were acquired using an amputation simulator and two-dimensional joystick motion. Simulation results of the proposed prosthetic arm control system using the EMG signals show that the arm effectively follows the desired trajectory according to the estimated force and direction of muscle movement.

A new neural network architecture for the recognition of patterns in images is proposed, partially based on the results of physiological studies. The proposed network is composed of multiple layers, and the nerve cells in each layer are connected by spatial filters that approximate the receptive fields in optic nerve fields. In the proposed method, pattern recognition for complicated images is carried out using global features as well as local features such as lines and end-points. A new method of generating matched filters representing global features is proposed for this network.

An implementation scheme for a magnetic nerve stimulator using a switching mode power supply is proposed. By using a switching mode power supply rather than a conventional linear power supply for charging the high voltage capacitors, the weight and size of the magnetic nerve stimulator can be considerably reduced. The maximum output voltage of the developed stimulator is 3,000 volts and the switching time is about 100 msec. Experimental results of human nerve stimulation using the developed stimulator are presented.

In this paper, we describe the design methodology and specifications of module-based bedside monitors developed for patient monitoring. The bedside monitor consists of a main unit and module cases with various parameter modules. The main unit includes a 12.1" TFT color LCD, a main CPU board, and peripherals such as a module controller, an Ethernet LAN card, a video card, and a rotate/push button controller. The main unit can connect at most three module cases, each of which can accommodate up to 7 parameter modules. The modules cover electrocardiography, respiration, invasive blood pressure, noninvasive blood pressure, temperature, and SpO2 with plethysmography.
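
As a rough illustration of the prediction-error stage of the two-stage adaptive QRS filter described above, the sketch below runs a basic LMS prediction-error filter over a synthetic signal; the filter order, step size, and crude ECG surrogate are hypothetical, not the paper's design.

```python
import numpy as np

def lms_prediction_error(x, order=8, mu=0.01):
    """LMS prediction-error filter: the filter tracks the predictable
    (slowly varying) background, so the prediction error emphasizes
    abrupt events such as the QRS complex."""
    w = np.zeros(order)
    err = np.zeros(len(x))
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]      # most recent samples first
        err[n] = x[n] - w @ past         # prediction error output
        w += mu * err[n] * past          # LMS coefficient update
    return err

fs = 360                                  # MIT/BIH sampling rate (Hz)
t = np.arange(5 * fs) / fs
ecg = 0.3 * np.sin(2 * np.pi * 0.5 * t)   # baseline wander (background)
ecg[(np.arange(len(t)) % fs) < 4] += 1.0  # crude once-per-second "QRS" spikes
print(np.abs(lms_prediction_error(ecg)).max())
```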
