• Title/Summary/Keyword: Dynamic Performance


Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for computing prediction accuracy, which was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. 
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
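To make the described procedure concrete, here is a minimal sketch of a GA-optimized KNN random subspace ensemble. The synthetic dataset, the genome encoding (one k value plus a fixed-size feature subset per base classifier), the population size, and the mutation rate are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: GA searches over each member's k and feature subset,
# scored by majority-vote accuracy on a held-out fitness partition.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
N_MEMBERS, SUB_DIM, POP, GENS = 10, 8, 20, 15

X, y = make_classification(n_samples=600, n_features=24, random_state=42)
# training / fitness (overfitting guard) / validation split, as in the abstract
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=1)
X_fit, X_val, y_fit, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

def random_genome():
    """One genome = a k value and a feature subset for each ensemble member."""
    ks = rng.integers(1, 16, size=N_MEMBERS)
    feats = np.stack([rng.choice(24, SUB_DIM, replace=False) for _ in range(N_MEMBERS)])
    return ks, feats

def accuracy(genome, X_eval, y_eval):
    """Majority-vote accuracy of the decoded ensemble on an evaluation set."""
    ks, feats = genome
    votes = np.zeros((len(y_eval), 2))
    for k, f in zip(ks, feats):
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, f], y_tr)
        for i, p in enumerate(clf.predict(X_eval[:, f])):
            votes[i, p] += 1
    return (votes.argmax(axis=1) == y_eval).mean()

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=lambda g: accuracy(g, X_fit, y_fit), reverse=True)
    elite = pop[:POP // 2]                        # keep the fitter half
    children = []
    for _ in range(POP - len(elite)):             # refill by crossover + mutation
        (ka, fa), (kb, fb) = (elite[rng.integers(len(elite))] for _ in range(2))
        cut = int(rng.integers(1, N_MEMBERS))     # one-point crossover on members
        child = (np.r_[ka[:cut], kb[cut:]], np.vstack([fa[:cut], fb[cut:]]))
        if rng.random() < 0.3:                    # mutate one member's k value
            child[0][rng.integers(N_MEMBERS)] = rng.integers(1, 16)
        children.append(child)
    pop = elite + children

print("validation accuracy:", accuracy(pop[0], X_val, y_val))
```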

An Analytical Validation of the GenesWellTM BCT Multigene Prognostic Test in Patients with Early Breast Cancer (조기 유방암 환자를 위한 다지표 예후 예측 검사 GenesWellTM BCT의 분석적 성능 시험)

  • Kim, Jee-Eun;Kang, Byeong-il;Bae, Seung-Min;Han, Saebom;Jun, Areum;Han, Jinil;Cho, Min-ah;Choi, Yoon-La;Lee, Jong-Heun;Moon, Young-Ho
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.49 no.2
    • /
    • pp.79-87
    • /
    • 2017
  • GenesWell$^{TM}$ BCT is a 12-gene test that provides a prognostic risk score (BCT Score) for distant metastasis within the first 10 years in early breast cancer patients with hormone receptor-positive, HER2-negative, and pN0~1 tumors. In this study, we validated the analytical performance of GenesWell$^{TM}$ BCT. Gene expression values were measured by one-step real-time qPCR using RNA extracted from FFPE specimens of early breast cancer patients. The Limit of Blank, Limit of Detection, and dynamic range for each of the 12 genes were assessed with serially diluted RNA pools. Analytical precision and specificity were evaluated with three different RNA samples representing a low-risk group, a high-risk group, and a near-cutoff group according to their BCT Scores. GenesWell$^{TM}$ BCT could detect the expression of each of the 12 genes from less than $1ng/{\mu}L$ of RNA. Repeatability and reproducibility across multiple testing sites resulted in 100% and 98.3% consistency of risk classification, respectively. Moreover, it was confirmed that potential interfering substances do not affect the risk classification of the test. These findings demonstrate that GenesWell$^{TM}$ BCT has high analytical performance, with over 95% consistency in risk classification.
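The Limit of Blank / Limit of Detection estimation mentioned above is conventionally done with the standard CLSI EP17-style calculation from replicate measurements. A minimal sketch follows, with synthetic replicate signals as placeholders (not data from the paper):

```python
# LoB = mean(blank) + 1.645*SD(blank); LoD = LoB + 1.645*SD(low sample)
import numpy as np

rng = np.random.default_rng(0)
blank_reps = rng.normal(0.05, 0.02, 60)   # synthetic blank-run replicates
low_reps   = rng.normal(0.20, 0.05, 60)   # synthetic low-concentration replicates

z = 1.645                                 # one-sided 95th percentile
lob = blank_reps.mean() + z * blank_reps.std(ddof=1)   # Limit of Blank
lod = lob + z * low_reps.std(ddof=1)                   # Limit of Detection
print(f"LoB={lob:.3f}, LoD={lod:.3f}")
```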

Glass Dissolution Rates From MCC-1 and Flow-Through Tests

  • Jeong, Seung-Young
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2004.06a
    • /
    • pp.257-258
    • /
    • 2004
  • The dose from radionuclides released from high-level radioactive waste (HLW) glasses as they corrode must be taken into account when assessing the performance of a disposal system. In the performance assessment (PA) calculations conducted for the proposed Yucca Mountain, Nevada, disposal system, the release of radionuclides is conservatively assumed to occur at the same rate the glass matrix dissolves. A simple model was developed to calculate the glass dissolution rate of HLW glasses in these PA calculations [1]. For the PA calculations that were conducted for Site Recommendation, it was necessary to identify ranges of parameter values that bounded the dissolution rates of the wide range of HLW glass compositions that will be disposed. The values and ranges of the model parameters for the pH and temperature dependencies were extracted from the results of SPFT, static leach tests, and Soxhlet tests available in the literature. Static leach tests were conducted with a range of glass compositions to measure values for the glass composition parameter. The glass dissolution rate depends on temperature, pH, and the compositions of the glass and solution. The dissolution rate is calculated using Eq. 1: $rate = k_{0} \cdot 10^{\eta \cdot pH} \cdot e^{-E_{a}/RT} \cdot (1 - Q/K) + k_{long}$, where $k_{0}$, $\eta$, and $E_{a}$ are the parameters for glass composition, pH, and temperature dependence, respectively, and R is the gas constant. The term $(1 - Q/K)$ is the affinity term, where Q is the ion activity product of the solution and K is the pseudo-equilibrium constant for the glass. Values of the parameters $k_0$, $\eta$, and $E_a$ are determined under test conditions where the value of Q is maintained near zero, so that the value of the affinity term remains near 1. The dissolution rate under conditions in which the value of the affinity term is near 1 is referred to as the forward rate. This is the highest dissolution rate that can occur at a particular pH and temperature. The value of the parameter K is determined from experiments in which the value of the ion activity product approaches the value of K; this results in a decrease in the value of the affinity term and the dissolution rate. The highly dilute solutions required to measure the forward rate and extract values for $k_0$, $\eta$, and $E_a$ can be maintained by conducting dynamic tests in which the test solution is removed from the reaction cell and replaced with fresh solution. In the single-pass flow-through (SPFT) test method, this is done by continuously pumping the test solution through the reaction cell. Alternatively, static tests can be conducted with sufficient solution volume that the solution concentrations of dissolved glass components do not increase significantly during the test. Both the SPFT and static tests can be conducted for a wide range of pH values and temperatures. Both static and SPFT tests have shortcomings. The SPFT test requires analysis of several solutions (typically 6-10) at each of several flow rates to determine the glass dissolution rate at each pH and temperature. As will be shown, the rate measured in an SPFT test depends on the solution flow rate. 
The solutions in static tests will eventually become concentrated enough to affect the dissolution rate. In both the SPFT and static test methods, a compromise is required between the need to minimize the effects of dissolved components on the dissolution rate and the need to attain solution concentrations that are high enough to analyze. In this paper, we compare the results of static leach tests and SPFT tests conducted with a simple 5-component glass to confirm the equivalence of SPFT tests and static tests conducted with pH buffer solutions. Tests were conducted over the range of pH values that are most relevant for waste glass dissolution in a disposal system. The glass and temperature used in the tests were selected to allow direct comparison with SPFT tests conducted previously. The ability to measure parameter values with more than one test method, and an understanding of how the rate measured in each test is affected by various test parameters, provide added confidence in the measured values. The dissolution rate of a simple 5-component glass was measured at pH values of 6.2, 8.3, and 9.6 and $70^{\circ}C$ using static tests and single-pass flow-through (SPFT) tests. Similar rates were measured with the two methods. However, the measured rates are about 10X higher than the rates measured previously for a glass having the same composition using an SPFT test method. Differences are attributed to effects of the solution flow rate on the glass dissolution rate and to how the specific surface area of crushed glass is estimated. This comparison indicates the need to standardize the SPFT test procedure.
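As a worked illustration of Eq. 1, the sketch below evaluates the rate law at the abstract's conditions (pH 9.6, 70°C). The parameter values ($k_0$, $\eta$, $E_a$, $k_{long}$) are placeholders chosen only to show the mechanics; they are not the fitted values for any glass in the study.

```python
# rate = k0 * 10^(eta*pH) * exp(-Ea/(R*T)) * (1 - Q/K) + k_long
import math

def dissolution_rate(pH, T_K, k0=1.0e6, eta=0.4, Ea=80e3, Q_over_K=0.0, k_long=0.0):
    R = 8.314                                # gas constant, J/(mol*K)
    forward = k0 * 10**(eta * pH) * math.exp(-Ea / (R * T_K))
    return forward * (1.0 - Q_over_K) + k_long

# Q/K near zero gives the forward (maximum) rate at this pH and temperature;
# as Q approaches K, the affinity term drives the rate toward k_long.
r_forward = dissolution_rate(pH=9.6, T_K=343.15)
r_damped  = dissolution_rate(pH=9.6, T_K=343.15, Q_over_K=0.9)
```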


Modeling and Intelligent Control for Activated Sludge Process (활성슬러지 공정을 위한 모델링과 지능제어의 적용)

  • Cheon, Seong-pyo;Kim, Bongchul;Kim, Sungshin;Kim, Chang-Won;Kim, Sanghyun;Woo, Hae-Jin
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.22 no.10
    • /
    • pp.1905-1919
    • /
    • 2000
  • The main motivation of this research is to develop an intelligent control strategy for the Activated Sludge Process (ASP). ASP is a complex and nonlinear dynamic system because of the characteristics of wastewater, changes in influent flow rate, weather conditions, etc. The mathematical model of ASP also includes uncertainties that are ignored or not considered by the process engineer or controller designer. The ASP is generally controlled by a PID controller that consists of fixed proportional, integral, and derivative gain values. The PID gains are adjusted by an expert who has much experience with the ASP. An ASP model based on $Matlab^{(R)}5.3/Simulink^{(R)}3.0$ is developed in this paper. The performance of the model is tested against IWA (International Water Association) and COST (European Cooperation in the field of Scientific and Technical Research) data that include steady-state results over 14 days. The advantage of the developed model is that the user can easily modify or change the controller with the help of the graphical user interface. The ASP model, as a typical nonlinear system, can be used to simulate and test the proposed controller for educational purposes. Various control methods are applied to the ASP model and the control results are compared, in order to apply the proposed intelligent control strategy to a real ASP. Three control methods are designed and tested: a conventional PID controller, a fuzzy logic approach that modifies setpoints, and a fuzzy-PID control method. The proposed fuzzy-logic-based setpoint changer shows better performance and robustness under disturbances. An objective function can be defined and included in the proposed control strategy to improve effluent water quality and reduce the operating cost of a real ASP.
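The control idea, a conventional PID loop whose setpoint is adjusted by fuzzy-style rules, can be sketched as follows. The gains, the rule thresholds, and the one-state "plant" are illustrative assumptions, not the paper's Simulink model.

```python
# Discrete PID loop with a coarse rule-based setpoint changer for a toy
# dissolved-oxygen (DO) control problem under an influent load disturbance.
def pid_step(err, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    state["i"] += err * dt
    d = (err - state["e_prev"]) / dt
    state["e_prev"] = err
    return kp * err + ki * state["i"] + kd * d

def fuzzy_setpoint(base_sp, load):
    """Crude stand-in for the fuzzy setpoint changer: raise the DO setpoint
    under high load, lower it under low load."""
    if load > 0.5:
        return base_sp + 0.3
    if load < -0.5:
        return base_sp - 0.3
    return base_sp

state = {"i": 0.0, "e_prev": 0.0}
do_conc, base_sp = 1.5, 2.0                       # DO (mg/L) and base setpoint
for t in range(100):
    load = 0.8 if 30 <= t < 60 else 0.0           # step load disturbance
    sp = fuzzy_setpoint(base_sp, load)
    u = pid_step(sp - do_conc, state)
    do_conc += 0.1 * (u - 0.5 * do_conc - 0.4 * load)  # toy first-order plant
```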


A Kinematical Analysis of Belle Motion on Parallel Bars (평행봉 Belle 기술동작의 운동학적 분석)

  • Kong, Tae-Ung
    • Korean Journal of Applied Biomechanics
    • /
    • v.15 no.4
    • /
    • pp.43-53
    • /
    • 2005
  • This study examines how differences in kinematic variables influence the final regrasp after the somersault in the Belle movement on parallel bars. For this study, the following conclusions were drawn from a three-dimensional video analysis of three national-team athletes. 1. The total time used was 2.01 sec for S1, 2.17 sec for S3, and 2.19 sec for S2. When the elapsed time is short, it is difficult to perform the remaining movement of the vertical elevating flight easily and comfortably, and the athlete is judged to perform a small movement with a restricted swing. 2. In the change of body-center speed by each event, the angle was calculated as $-89.1^{\circ}$, the narrowest, in S1, $-81.96^{\circ}$, the widest, and $86.34^{\circ}$ in S3. In event E3, the average resultant speed was 4.07 m/s; S2 showed the fastest speed of 4.14 m/s, whereas S1 showed the slowest of 3.95 m/s. 3. The shoulder joint and hip joint are of note in E3. In E4, the point of release from the parallel bars with the longest vertical distance, S2 showed the greatest vertical height, -3.91 m. This is regarded as a preparatory movement for a dynamic performance, effectively using the elastic movement of the shoulder and hip joints while rising easily with a turning-back movement. In the fifth phase, a long airborne time and a large vertical displacement were shown from the start, with the air-flight movement regrasped securely from a high position. 4. In E5, a long flight time and a long vertical displacement were shown, allowing the regrasp after the somersault to be performed efficiently and stably from a high position at the peak of the body center. In particular, S2 held a somewhat high position, while S1 conversely performed the somersault in a low position with unstable motion. 5. In E3, at the point of the largest extension of the shoulder and hip joints, the shoulder joint angle was largest in S2 at $182^{\circ}$, as was the hip joint at $182^{\circ}$; the shoulder joint angle was smallest in S1 at $177^{\circ}$, with the hip joint at $176^{\circ}$. S1 is judged to perform a less confident motion with a reduced breadth of swing, while S2 makes the most effective use of flexion and extension of the shoulder and hip joints, performing the rotary movement with natural swing, drop, and rotary inertia. 6. In E6, the point of regrasping the parallel bars with the upper arms, the components of vertical and horizontal velocity indicate a stable regrasp. Follow-up studies on the parallel bars in real competition should address what this study left insufficient. If used as basic material for studying super-E-level skills performed after the Belle movement, this work may allow athletes to understand the technique in advance and perform it without difficulty. In particular, a skill such as the crucifix is quite advantageous for Asian athletes thanks to their smaller body shape. In conclusion, a suitable training environment should be prepared to gradually reduce trial and error by analyzing this movement kinematically.

A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • Prefetching is an effective technique for reducing the latency caused by memory accesses. However, excessively aggressive prefetching not only causes cache pollution, which cancels out the benefits of prefetching, but also increases bus traffic, degrading overall performance. In this thesis, a prefetch filtering scheme is proposed that dynamically decides whether to commence a prefetch by consulting a filtering table, in order to reduce the cache pollution caused by unnecessary prefetches. First, the prefetch hashing table 1-bit state filtering scheme (PHT1bSC) is analyzed to expose the lock problem of the conventional scheme: like the conventional scheme it uses N:1 mapping, but each entry holds a 1-bit value with two states. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT) is proposed as the main idea of this paper, and it exhibits the most exact filtering performance. This scheme has a table of the same length as the PHT1bSC scheme, while the contents of each entry have the same fields as the CBAT scheme; a recently prefetched but never-referenced data block address is mapped 1:1 to an entry of the filter table. Simulations were run with commonly used prefetch schemes on general benchmarks and multimedia programs while varying cache parameters. Compared with no filtering, the PBALT scheme showed an improvement of up to 22%, and by virtue of its enhanced filtering accuracy the cache miss ratio decreased by 7.9% compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.
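The lookup-table filtering idea can be sketched in a few lines: before issuing a prefetch for a block address, consult a table of recently prefetched addresses and suppress the request on an exact hit. Storing the full block address per entry (the 1:1 mapping the abstract attributes to PBALT, avoiding the aliasing of hashed N:1 tables) is the point illustrated; the table size and replacement behavior here are simplifying assumptions, not the thesis's exact design.

```python
class PrefetchFilter:
    """Direct-mapped filter table holding one full block address per entry."""

    def __init__(self, size=256):
        self.size = size
        self.table = [None] * size

    def should_prefetch(self, block_addr):
        idx = block_addr % self.size
        if self.table[idx] == block_addr:   # exact match: recently prefetched
            return False                    # filter it (avoid cache pollution)
        self.table[idx] = block_addr        # record this prefetch decision
        return True

flt = PrefetchFilter()
flt.should_prefetch(0x1A40)   # True: issue the prefetch
flt.should_prefetch(0x1A40)   # False: filtered as a repeat prefetch
```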

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.101-124
    • /
    • 2018
  • Recently, most technologies have been developed in various forms, through the advancement of a single technology or through interaction with other technologies. In particular, these technologies have the characteristic of convergence caused by interaction between two or more techniques. In addition, efforts to respond to technological changes in advance are continuously increasing through the forecasting of promising convergence technologies that will emerge in the near future. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology has the characteristics of several technologies according to the principle of its generation. Therefore, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted on the themes of discovering new convergence technologies and analyzing their trends. Accordingly, information about new convergence technologies is being provided more abundantly than in the past. However, existing methods for analyzing convergence technology have some limitations. First, most studies dealing with convergence technology analyze data through predefined technology classifications. Technologies appearing recently tend to have the characteristics of convergence and thus consist of technologies from various fields; in other words, a new convergence technology may not belong to the defined classification. Therefore, the existing methods do not properly reflect the dynamic change of the convergence phenomenon. Second, in order to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators. This approach does not fully utilize the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; it can grow into an independent field or disappear rapidly, according to changes in the technologies it depends on. In existing analyses, the growth potential of a convergence technology is judged through traditional, general-purpose indicators. However, these indicators do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. In studies of convergence technology, forecasting promising technologies has received relatively little attention due to the complexity of the field. Therefore, it is difficult to find a method to evaluate the accuracy of such models. In order to activate the field of forecasting promising convergence technologies, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study. 
To overcome these limitations, we propose a new method for the analysis of convergence technologies. First of all, through topic modeling, we derive a new technology classification in terms of text content; it reflects the dynamic change of the actual technology market rather than an existing fixed classification standard. We then identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. Furthermore, we devise a centrality indicator (PGC, potential growth centrality) to forecast the future growth of a technology by utilizing the centrality information of each technology; it reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. As a result, it is confirmed that the forecasting model based on the proposed centrality indicator achieves a forecast accuracy up to about 2.88 times higher than that of models based on currently used network indicators.
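The shape of this pipeline, topics extracted from patent text, a directed influence network between topics, and a growth score computed from network centrality, can be sketched as below. The edge weights are hypothetical, and the weighted in-influence score is only a stand-in; the paper's PGC indicator is its own design and is not reproduced here.

```python
import networkx as nx

# Hypothetical topic-to-topic influence weights, as would be derived from
# topic correspondence weights of documents after topic modeling.
edges = [("topicA", "topicB", 0.6),
         ("topicA", "topicC", 0.3),
         ("topicB", "topicC", 0.8)]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Score each technology topic by its weighted incoming influence
# (a simple stand-in for a maturity/dependence-aware centrality like PGC).
score = {n: sum(d["weight"] for _, _, d in G.in_edges(n, data=True))
         for n in G.nodes}
promising = max(score, key=score.get)   # topic with the highest score
```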

The Effects of Environmental Dynamism on Supply Chain Commitment in the High-tech Industry: The Roles of Flexibility and Dependence (첨단산업의 환경동태성이 공급체인의 결속에 미치는 영향: 유연성과 의존성의 역할)

  • Kim, Sang-Deok;Ji, Seong-Goo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.31-54
    • /
    • 2007
  • The exchange between buyers and sellers in the industrial market is changing from short-term to long-term relationships. Long-term relationships are governed mainly by formal contracts or informal agreements, but many scholars now assert that controlling a relationship with formal contracts under environmental dynamism is inappropriate. In this case, partners depend on each other's flexibility or interdependence. The former, flexibility, provides a general frame of reference, order, and standards against which to guide and assess appropriate behavior in dynamic and ambiguous situations, thus motivating the value-oriented performance goals shared between partners. It is based on social sacrifices, which can potentially minimize opportunistic behaviors. The latter, interdependence, means that each firm possesses a high level of dependence in a dynamic channel relationship. When interdependence is high in magnitude and symmetric, each firm enjoys a high level of power and the bonds between the firms should be reasonably strong. Strong shared power is likely to promote commitment because of the common interests, attention, and support found in such channel relationships. This study deals with environmental dynamism in the high-tech industry. Firms in the high-tech industry regard successfully coping with environmental changes as a key success factor. However, due to the lack of studies dealing with environmental dynamism and supply chain commitment in the high-tech industry, it is very difficult to find effective strategies for coping with them. This paper presents the results of an empirical study on the relationship between environmental dynamism and supply chain commitment in the high-tech industry. We examined the effects of customer, competitor, and technological dynamism on supply chain commitment, as well as the moderating effects of the flexibility and dependence of supply chains. This study was confined to high-tech industries characterized by rapid technological change and short product lifecycles. Flexibility among the firms of this industry, characterized by rapid growth, is more important here than in any other industry; thus, various kinds of environmental dynamism can affect a supply chain relationship. The targeted industries were electronic parts, metal products, computers, electric machinery, automobiles, and medical precision manufacturing. Data were collected as follows. The researchers obtained the lists of parts suppliers of two companies, N and L, with international competitiveness in the mobile phone manufacturing industry, and of the suppliers in a business relationship with S company, a semiconductor manufacturer. They were asked to respond to the survey via telephone and e-mail. During the survey period of February-April 2006, we collected data from 44 companies. The respondents were restricted to staff with direct dealing authority at subcontractor companies (the suppliers) with at least three months of dealing experience with a manufacturer (an industrial material buyer). The measurement validation procedures included scale reliability and discriminant and convergent validity. Traditional reliability measures, such as Cronbach's alpha, were also used; all reliabilities were greater than .70. A series of exploratory factor analyses was conducted. 
We conducted confirmatory factor analyses to assess the validity of our measurements. A series of chi-square difference tests was conducted to ensure discriminant validity: for each pair of constructs, we estimated an unconstrained model and a constrained model and compared the two model fits. All these tests supported discriminant validity. Also, all items loaded significantly on their respective constructs, providing support for convergent validity. We then examined composite reliability and average variance extracted (AVE); the composite reliability of each construct was greater than .70 and the AVE of each construct was greater than .50. According to the multiple regression analysis, customer dynamism had a negative effect and competitor dynamism a positive effect on a supplier's commitment. In addition, flexibility and dependence had significant moderating effects on customer and competitor dynamism. On the other hand, all hypotheses about technological dynamism showed no significant effects on commitment; in other words, technological dynamism had no direct effect on a supplier's commitment and was not moderated by the flexibility and dependence of the supply chain. This study contributes as a rare study of environmental dynamism and supply chain commitment in the high-tech industry. In particular, it verified the effects of three kinds of environmental dynamism on a supplier's commitment and empirically tested how these effects are moderated by flexibility and dependence. The results showed that flexibility and interdependence serve to strengthen a supplier's commitment under environmental dynamism in the high-tech industry; thus, relationship managers in the high-tech industry should make supply chain relationships flexible and interdependent. The limitations of the study are as follows. First, regarding the research setting, the study was conducted in the high-tech industry, in which the direction of change in the power balance of supply chain dyads is usually determined by manufacturers, so generalization is difficult; the power structure between partners needs to be controlled in a future study. Second, we treated flexibility throughout the paper as positive, but it can also be negative, e.g., violating an agreement or moving in the wrong direction; therefore, the multi-dimensionality of flexibility needs to be investigated in future research.
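The composite reliability and AVE checks reported above follow the standard Fornell-Larcker formulas computed from standardized factor loadings. A minimal sketch with hypothetical loadings (not the study's estimates):

```python
# AVE = sum(loading^2) / n; CR = (sum(loading))^2 / ((sum(loading))^2 + sum(error))
loadings = [0.78, 0.81, 0.74, 0.69]           # hypothetical items for one construct
errors   = [1 - l**2 for l in loadings]       # standardized error variances

ave = sum(l**2 for l in loadings) / len(loadings)           # should exceed .50
cr  = sum(loadings)**2 / (sum(loadings)**2 + sum(errors))   # should exceed .70
print(f"AVE={ave:.2f}, CR={cr:.2f}")
```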


Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people in the world generate news minute by minute. Some news can be anticipated, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, to predict what might happen in the near future, and to share and discuss the news. People make better daily decisions by watching the news and extracting useful information from it. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossip and breaking news. User interest also changes over time, and many people have no interest in outdated news. A personalized news service therefore needs to reflect users' recent interests; that is, it should dynamically manage user profiles. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. For a personalized service, the user's personal information is required; a social network service is used to extract this information. The proposed system constructs a dynamic user profile based on the user's recent information on Facebook, one of the social network services. The user information contains personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their contents and connect with people, and Facebook users can add a Facebook Page to specify their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages cannot be directly matched to a news topic, because a Page deals with individual objects and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match a Page to a news topic by using the hierarchy information of its objects. By using the recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure user preferences on news. To generate news profiles, the news categories predefined by news media are used, and keywords of news articles are extracted after analyzing the news contents, including title, category, and scripts. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. The same format is used for user profiles and news profiles so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing methods of similarity calculation in the vector space model do not cover synonyms, hypernyms, and hyponyms, because they only handle the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The Top-N news articles with high similarity values for a target user are then recommended to the user. 
To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and construct news profiles. We compared the performance of the proposed method with that of two benchmark algorithms: a traditional method based on TF-IDF, and the 6Sub-Vectors method, which divides the points for obtaining keywords into six parts. Experimental results demonstrate, in terms of the prediction error of recommended news, that the proposed system provides useful news to users by applying the user's social network information and WordNet functions.
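The core profile-matching step, TF-IDF vectors compared by cosine similarity, can be sketched as below. The documents and the "user profile" text are illustrative placeholders, and the WordNet-based synonym/hypernym expansion the paper adds is omitted here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

news = ["storm damages coastal towns",
        "new chip boosts phone battery life",
        "league final ends in penalty shootout"]
user_profile = "smartphone hardware and battery technology"

vec = TfidfVectorizer()
doc_mat = vec.fit_transform(news + [user_profile])   # shared vector space

# Similarity of the user profile (last row) to every news profile
sims = cosine_similarity(doc_mat[-1], doc_mat[:-1]).ravel()
top_n = sims.argsort()[::-1][:2]                     # recommend Top-N articles
```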

A 10b 50MS/s Low-Power Skinny-Type 0.13um CMOS ADC for CIS Applications (CIS 응용을 위해 제한된 폭을 가지는 10비트 50MS/s 저 전력 0.13um CMOS ADC)

  • Song, Jung-Eun;Hwang, Dong-Hyun;Hwang, Won-Seok;Kim, Kwang-Soo;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.5
    • /
    • pp.25-33
    • /
    • 2011
  • This work proposes a skinny-type 10b 50MS/s 0.13um CMOS three-step pipeline ADC for CIS applications. Analog circuits for CIS applications commonly employ a high supply voltage to acquire a sufficiently acceptable dynamic range, while digital circuits use a low supply voltage to minimize power consumption. The proposed ADC converts analog signals in a wide-swing range to low-voltage digital data using both of the two supply voltages. An op-amp sharing technique employed in the residue amplifiers properly controls currents depending on the amplification mode of each pipeline stage, optimizes the performance of the op-amps, and improves power efficiency. In the three FLASH ADCs, the number of input stages is reduced by half with the interpolation technique, while each comparator consists of only a latch with low kick-back noise, based on pull-down switches that separate the input and output nodes. The reference circuits achieve the required settling time with only on-chip low-power drivers, and the digital correction logic has two kinds of level shifters depending on the signal-voltage levels to be processed. The prototype ADC, in a 0.13um CMOS process supporting 0.35um thick-gate-oxide transistors, demonstrates a measured DNL and INL within 0.42LSB and 1.19LSB, respectively. The ADC shows a maximum SNDR of 55.4dB and a maximum SFDR of 68.7dB at 50MS/s. The ADC, with an active die area of 0.53$mm^2$, consumes 15.6mW at 50MS/s with an analog voltage of 2.0V and two digital voltages of 2.8V ($=D_H$) and 1.2V ($=D_L$).
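For context on the reported dynamic performance, the standard relation $ENOB = (SNDR - 1.76)/6.02$ converts the measured SNDR into effective bits; the sketch below applies it to the figure from the abstract.

```python
# Effective number of bits from the measured peak SNDR of this 10b ADC
sndr_db = 55.4
enob = (sndr_db - 1.76) / 6.02   # ≈ 8.9 effective bits at 50MS/s
print(f"ENOB ≈ {enob:.1f} bits")
```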