Research Output of the Pakistani Library and Information Science Authors: A Bibliometric Evaluation of Their Impact

  • Received : 2017.04.30
  • Accepted : 2017.05.24
  • Published : 2017.05.31


This paper examines the individual performance of Pakistani Library and Information Science (LIS) researchers, using 601 of their cited papers, in terms of research output and its impact on the national and international LIS literature through various bibliometric indicators. A list of 139 authors was compiled with the help of the Library, Information Science, and Technology Abstracts (LISTA) and some other sources. Data were collected from Google Scholar, and SPSS version 20 was used to identify the relationship between self-citations and various performance indices of the authors. The average citations received per paper varied from 1.80 to 10.08. About half of the papers were single-authored, whereas fewer than one-fifth were by three or more authors. The authors who worked in collaboration produced more papers and received more citations. The h-index, g-index, hI-index, hI-norm, and e-index were used to rank each author. An intra-group citation grid revealed the volume of self-citations and a small group of authors who cite each other frequently owing to close academic and social relationships. The correlations between self-citations and the impact indices used were significant. The findings are useful for institutions concerned with awards, promotions, etc. Future research should seriously consider the self-citations and social networking of authors when examining their citation-based research performance.



Research scholars conduct scientific investigations of their field’s problems and then communicate the procedures used and the findings to their colleagues. The most authentic and widespread medium for communicating such information is the scholarly journal. These journals disseminate the results of each study, which provides a base for further research. Scholarly journals are considered primary sources for calculating various metrics regarding the research output and impact of researchers, journals, institutions, and countries. The impact of an individual's research output can be defined as “the degree to which it has been useful to other researchers and users in generating further research and applications” (Shadbolt et al., 2006, p. 202).

Research scholars, as a matter of practice, acknowledge each other's work by means of proper citations. These “citations are meant to show that a publication has made use of the contents of other publications” (Bornmann et al., 2008, p. 93). A publication with a relatively higher number of received citations is generally considered excellent because received citations have been used as a measure of quality (Jan & Anwar, 2013; Repanovici, 2011; Sife & Lwoga, 2014). They are also considered indicators of the impact of that publication's author(s). Thus, the well-cited work of researchers means that they have made a significant impact on the research of their field (Harzing & Wal, 2008).

The body of research-based knowledge is increasing all over the world, including in Pakistan. Bashir (2013), using SCImago data (derived from Scopus), reported that a total of 893 papers were published from Pakistan in 1996, which came to 0.08 per cent of the world's scientific output. In 2010, this number increased to 6,987, a growth of 682.42 per cent, raising Pakistan's share to 0.32 per cent. SCImago identified only seven Pakistani documents in 1996 in the field of Library and Information Science (LIS), a number that increased to 312 in 2016 in the Scopus database. The quality of Pakistani LIS research is generally low. However, the productivity rate is satisfactory when citations are observed in other databases (e.g., Google Scholar).

In order to determine the impact of the literature of any field, we need to examine its various bibliometric indicators. Bibliometrics provides useful tools for benchmarking the research output of a researcher, an institution, or a country, and for measuring their post-publication impact (Ali & Richardson, 2016). Citation-based bibliometric analysis of a researcher's publications can be employed as a useful and powerful indicator for studying their influence (Borgman & Furner, 2002; Weingart, 2005). Such analysis at the individual level can help in understanding the research process in order to support decisions on recruitment, promotion, recognition, awards, etc. (Costas et al., 2010; Jan & Anwar, 2013). This type of analysis can also be used to compare the research performance of individual researchers. However, such comparison should be conducted within a research group from the same field in order to obtain a better understanding of their research performance (Adam, 2002; Bornmann et al., 2008).



A number of studies have tried to assess the impact of researchers by using various approaches with citation analysis as the most commonly and widely used one. Some closely related studies are reviewed here.

Jacso (2009) discussed various bibliometric indicators calculated through Publish or Perish software. The author concluded that it was an elegant tool which provided essential, compact, and efficient output in terms of various bibliometric indicators for measuring the impact of authors and journals. The impact of the researchers at an individual level was usually assessed within the same academic field. For example, Cronin and Meho (2006) ranked 31 Information Science faculty members from the United States on the basis of their h-index scores. Razzaque and Wilkinson (2007) utilized Publish or Perish for assessment of the research performance of senior marketing academics of Australian universities. They used four impact indices, i.e., h-index, g-index, hc-index, and hI-norm. Meho and Yang (2007) examined the research impact of 25 Library and Information Science faculty utilizing Scopus, Google Scholar, and Web of Science. Using the same databases, Bar-Ilan (2008) compared the h-index scores of 40 Israeli researchers. These studies concluded that Google Scholar was a useful citation retrieval tool due to its wide coverage of documents.

McKercher (2008) identified the most frequently cited tourism scholars by using Google Scholar and Publish or Perish [PoP] and suggested that Google Scholar and PoP could provide an alternative means to develop impact scores for scholars. McCallum (2010), using PoP, observed the citation ratings of herpetologists and noted that the h-index and g-index of the authors increased with the increase in their career length. With the help of various bibliometric indicators calculated via PoP, Khey et al. (2011) re-ranked the top female academic “stars” in the field of criminology and criminal justice. Using the same software, Long, Boggess, and Jennings (2011) re-ranked the top 10 academicians in the field of criminology and criminal justice who were identified and ranked in an earlier study. Similarly, Repanovici (2011), using PoP, analyzed the 2008 research performance of the professors at a Romanian university via different impact indices and concluded that the citations received by the work of a researcher was a qualitative indicator of his/her work.

In Pakistan, the individual contributions of two LIS scholars, Anis Khurshid and Khalid Mahmood, have been studied in terms of various bibliometric characteristics (Mahmood & Rehman, 2009; Qayyum & Naseer, 2013). General bibliographic indicators such as publication type, publication sources, and authorship pattern were studied. However, the research impact of these scholars was not examined. Jan and Anwar (2013) studied the research impact of 53 Pakistani Library and Information Science faculty members. They retrieved the data from Google Scholar via PoP. Various impact indices (i.e., h-index, g-index, hc-index, hI-norm, and e-index) were used to determine the impact of these 53 authors. In another study, Khurshid (2013) examined the quality of research of Pakistani LIS authors. He used the number of articles published in JCR-ranked journals and the impact factor scores of the host journals. He ignored the other indicators that measure the quality or impact of literature (e.g., impact indices, total number of citations, etc.). Ali and Richardson (2016) bibliometrically analyzed the research publishing patterns of Pakistani LIS authors. They did not study the individual research impact of the scholars. However, they highlighted various areas for future research in order to reach an in-depth understanding of the research profile of Pakistani LIS scholars. The present study is an endeavor in that direction.

The published literature includes various indicators used for measuring research performance or comparing individual researchers. These include the h-index (Hirsch, 2005), g-index (Egghe, 2006), e-index (Zhang, 2009), hI-norm, total number of publications, and total received citations (Harzing, 2007). However, due to some inherent complications, it is very difficult to obtain a single precise standard for measuring the research performance of individual researchers (Costas et al., 2010; Macri & Sinha, 2006). This study was designed to examine and identify the impact of Pakistani LIS scholars' research output by using several bibliometric indicators in order to obtain a reasonable picture of their research impact and performance. The following objectives were framed to achieve that purpose:

  • To identify the most productive Pakistani authors in terms of their cited documents;
  • To identify the most-cited Pakistani authors;
  • To measure the performance of these authors on the basis of various impact indices;
  • To point out the phenomenon of citing each other and self-citing among these authors;
  • To determine the relationship between the total cited papers, total received citations, self-citations, and various performance indices of these authors.



3.1 Sample of the Study

This study was limited to Pakistani LIS researchers. The Library, Information Science, and Technology Abstracts (LISTA) was searched on August 11, 2013 to identify and compile a list of the names of Pakistani LIS researchers using ‘Pakistan’ and ‘librar*’ as search terms. A total of 387 hits were received. Some of the unrelated citations and small items like news and editorials were ignored. Articles written by foreigners as single authors were excluded. However, those papers were included which were written as a result of collaboration between Pakistani and foreign writers. The authors from the rest of the citations were noted. All authors from the papers of Z. Khurshid (2013) and Anwar and Saeed (1999) were added to the list. Some authors personally known to the writers, e.g. Abdul Moid, Abdus Subbuh Qasimi, and M. Adil Usmani were also included. This process resulted in a list of 139 authors. 


3.2 Data Collection Source

In general, the Web of Science (WoS), Scopus, and Google Scholar databases are the three major sources for studying the citation characteristics of individual researchers and scholarly journals. However, some academic fields may have wide coverage in one of these three databases but not in all. For example, Meho and Yang (2007) compared Web of Science, Scopus, and Google Scholar on the basis of citations of 1,000 scholarly works of 15 faculty members of a School of Library and Information Science. They concluded that Google Scholar, as compared to Web of Science and Scopus, was very helpful to retrieve a significant number of citations which could be useful evidence of researchers’ broader intellectual and international impact. Harzing and Wal (2008) found that the areas of management and international business had comprehensive citation coverage in Google Scholar. Therefore, they recommended the use of Google Scholar-based citation metrics for evaluating the impact of individual researchers, particularly in these areas. Similarly, Harzing and her colleagues have concluded many times that Google Scholar has a wider coverage, higher research metrics, and better aggregating mechanisms than Web of Science and Scopus and provides a comprehensive snapshot of research impact at individual levels, especially for social sciences (Harzing, 2013; 2014; Harzing & Alakangas, 2016). Thus, because of free accessibility, wider coverage of LIS literature, and better citation indexing, Google Scholar was chosen to be the source of data for this study. 


3.3 Data Collection Tool

The Publish or Perish (PoP) software was used for data collection from Google Scholar. This software retrieves raw citations from Google Scholar and then, through analysis, “presents a wide range of citation metrics in a user-friendly format” (Harzing & Wal, 2008, p. 61). PoP was developed by Tarma Software Research with input from Harzing and is available free of charge for personal and research use (Harzing & Wal, 2008). The retrieved results can be copied to other tools, such as Microsoft Excel and Microsoft Word, for further analysis. This software has been utilized for citation analysis in a number of previous studies (e.g., Jacso, 2009; Razzaque & Wilkinson, 2007; Repanovici, 2011).


3.4 Data Collection Procedures

Citation analysis was performed with the help of PoP for each of the 139 identified authors individually. This analysis identified 68 authors, out of the 139, whose works had received citations. Of these 68 authors, only 21 had five or more cited papers. The data for these 21 authors were collected during the last ten days of August 2013 using PoP and were used for further analysis. The metrics retrieved via PoP for each author included the total number of cited publications, total citations, average citations per paper, h-index, g-index, and hI-norm. The data for each author were copied to Microsoft Word for further manual analysis, which was a tedious and time-consuming exercise. Only one homonym (i.e., Khalid Mahmood) was found during data collection; his publications were identified and duplications removed before obtaining the final data. In case of any doubt regarding the authenticity of a publication, repeated searches were done via Google Scholar for confirmation. Also, in many cases citing sources were verified and duplicate entries were removed. SPSS version 20 was utilized to identify the relationship between self-citations and various performance indices of the authors.


The following sections present the results and discussion with six tables displaying statistical data.

Table 1 provides information about the 21 cited authors, their cited papers, and the data related to their citations. These authors collectively contributed 601 papers which received a total of 3,851 citations, an average of 6.4 citations per paper. The papers of Shaheen Majid received the maximum number of citations (635), followed by Khalid Mahmood (594), Abdus Sattar Chaudhry (477), and Mumtaz Ali Anwar (408). The mean citations per paper ranged from a low of 1.80 to a high of 10.08. Ranked by mean citations, Shaheen Majid was first with 10.08, followed by Muhammad Ramzan with 9.80 and Mumtaz Anwar with 8.33. The lowest mean citations were received by Shafiq-ur-Rehman (1.83) and Midrar Ullah (1.80). These results are comparable to the published literature. Sife and Lwoga (2014), who used Google Scholar and the PoP software, studied the 12 most productive librarians from Tanzania, who published 258 papers receiving a total of 1,058 citations (an average of 4.10 citations per paper). Their mean citations ranged from 1.76 to 6.86. The situation regarding the productivity and citations of Pakistani authors seems better.

The data regarding the authorship pattern of the cited papers (Table 2) clearly indicate that most of the papers (n=300, 49.92%) were single-authored, followed by two-authored papers (n=186, 30.95%). The proportions of four-, five-, and six-authored papers were very low. For six authors (Khalid Mahmood, Shaheen Majid, Abdus Sattar Chaudhry, Sajjad-ur-Rehman, Farzana Shafique, and Midrar Ullah), two-authored papers outnumbered their papers in any other authorship pattern.


Table 1. Authors with their Cited Papers and Received Citations (Arranged according to citations received)


The rest of the authors had mostly single-authored papers. These 601 cited papers were almost equally divided between single- and multi-authored. The publications of the prolific and most-cited authors were generally in joint authorship. It is important to note that a very small number of papers (n=8, 1.33%) were five- and six-authored.

The earlier literature has also shown mixed trends regarding the authorship pattern in LIS research. For example, Singh (2009), who studied Indian IT and LIS literature, found that most of the papers (60.39%) were single-authored. The two-authored papers were 30.30 per cent and three-authored 9.31 per cent. Yazit and Zainab (2007) studied various bibliometric aspects of the literature produced by Malaysian LIS authors during 1965-2005. Their findings revealed that 76.93 per cent of the papers were single-authored, 19.14 per cent two-authored, and 3.92 per cent three-authored. Similarly, single LIS journal studies have shown the dominance of single-authorship literature (e.g. Hussain & Fatima, 2010; Swain, 2011). It seems that the authorship pattern of Pakistani LIS literature is similar to the world literature. 

There are many indices that determine the impact or research performance of individual researchers. Among these, the h-index and g-index are the most used in the literature (Egghe, 2006; Hirsch, 2005; Saad, 2006). The present study reports the values of different indices for each of the 21 authors (Table 3). As indicated, Shaheen Majid received the highest h-index value of 16, meaning that sixteen of his publications were cited sixteen or more times each while the rest were cited fewer than sixteen times each (Hirsch, 2005). He received a g-index value of 21 when more weight was given to his highly cited publications (Egghe, 2006). When the excess citations from his highly cited publications were considered, he obtained an e-index of 11.66. The e-index discriminates between individual researchers with the same h-index scores but different citation patterns (Zhang, 2009). The next highest h-index values were received by Khalid Mahmood (13), followed by Mumtaz Ali Anwar (12). On the g-index, Abdus Sattar Chaudhry received the second highest value (18), followed by Khalid Mahmood (17).
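These three indices are straightforward to compute from a list of per-paper citation counts. The sketch below implements the standard definitions cited above; it is illustrative only (the study itself obtained these values via Publish or Perish, not this code), and the citation counts in the example are made up:

```python
from math import sqrt

def h_index(cites):
    # Largest h such that h papers have at least h citations each (Hirsch, 2005).
    c = sorted(cites, reverse=True)
    return sum(1 for i, x in enumerate(c, start=1) if x >= i)

def g_index(cites):
    # Largest g such that the top g papers together hold at least g^2 citations (Egghe, 2006).
    c = sorted(cites, reverse=True)
    total, g = 0, 0
    for i, x in enumerate(c, start=1):
        total += x
        if total >= i * i:
            g = i
    return g

def e_index(cites):
    # Square root of the h-core citations in excess of h^2 (Zhang, 2009).
    c = sorted(cites, reverse=True)
    h = h_index(cites)
    return sqrt(sum(c[:h]) - h * h)
```

For example, a hypothetical author whose papers are cited [10, 8, 5, 4, 3] times has h = 4 (four papers with at least four citations each), g = 5 (all five papers together hold 30 ≥ 25 citations), and e = √(27 − 16) ≈ 3.32.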

The hI-index is calculated by dividing the standard h-index score by the average number of authors in the publications that contribute to the h-index. The purpose of this index is to reduce the effects of co-authorship (Batista et al., 2006). The hI-norm is calculated by first normalizing the number of citations for each paper by its author count and then computing the h-index on the normalized counts. Compared to the hI-index, the hI-norm accounts more accurately for the effects of co-authorship (Harzing, 2007). On both of these indicators, Khalid Mahmood ranked first. On the hI-index, S. J. Haider ranked second with a value of 8.10, followed by Anis Khurshid with a value of 7. On the hI-norm, the second position was shared by Shaheen Majid, Mumtaz Anwar, and S. J. Haider, with the third position going to Abdus Sattar Chaudhry and Sajjad ur Rehman. Abdus Sattar Chaudhry ranked second on the e-index, followed by Mumtaz Anwar.
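Both co-authorship corrections can be sketched in the same way, treating each paper as a (citations, author_count) pair. This is an illustrative reimplementation of the definitions of Batista et al. (2006) and Harzing (2007), not the PoP code itself, and the example data are hypothetical:

```python
def h_index(cites):
    # Largest h such that h values are at least h each.
    c = sorted(cites, reverse=True)
    return sum(1 for i, x in enumerate(c, start=1) if x >= i)

def hi_index(papers):
    # h-index divided by the mean author count of the h-core papers (Batista et al., 2006).
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = h_index([c for c, _ in papers])
    if h == 0:
        return 0.0
    mean_authors = sum(a for _, a in ranked[:h]) / h
    return h / mean_authors

def hi_norm(papers):
    # Normalize each paper's citations by its author count, then take the h-index (Harzing, 2007).
    return h_index([c / a for c, a in papers])
```

With papers cited (10 citations, 2 authors), (8, 1), (5, 1), (4, 2), (3, 3): h = 4, the mean h-core author count is 1.5, so hI ≈ 2.67; the normalized counts [5, 8, 5, 2, 1] give an hI-norm of 3.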

A very interesting phenomenon of intra-group citing (self and each other) among the 21 researchers covered in this study is presented in Table 4. The grid is designed so that the top row lists the citing authors and the extreme left column lists the cited authors. Self-citations are shown in bold along the diagonal from the top left corner to the bottom right. The key below the table gives each author's serial number and name, with the number of papers in brackets.

It is normal for researchers to cite their earlier works while producing a new one, provided the content is genuinely related. Such citations are called self-citations, meaning that the author sets of the cited and citing papers are not disjoint. Sometimes this practice makes a substantial contribution to the total number of citations of the researcher concerned (Hyland, 2003; Snyder & Bonzi, 1998). The present study identified 452 (11.74%) self-citations out of a total of 3,851, which confirms the trend in the earlier literature. However, self-citations can be a serious problem when citations are used as indicators of individual research impact (Aksnes, 2003). Their excessive presence is likely to distort the values of bibliometric indices in favor of the author who indulges in self-citation more than necessary. This practice raises the question of the extent to which self-citations are acceptable.
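The "not disjoint" criterion translates directly into a set test. A minimal sketch of how a self-citation can be flagged when tallying a citation grid like Table 4 (the author names here are hypothetical):

```python
def is_self_citation(citing_authors, cited_authors):
    # A citation is a self-citation when the citing and cited papers
    # share at least one author, i.e. their author sets are not disjoint.
    return not set(citing_authors).isdisjoint(cited_authors)
```

For example, a paper by ["A. Khan", "B. Ali"] citing one by ["B. Ali", "C. Shah"] counts as a self-citation, while a citation between fully distinct author sets does not.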

Among the top four highly cited authors of the 21 included in the present study, the author at serial number 2 (Table 4), who received 6.67 citations per paper, used the highest percentage (14.14) of self-citations, whereas the author at serial number 4, who received 8.22 citations per paper, used the lowest percentage (4.66). Similarly, the author at serial number 3, with 8.22 citations per paper, had 7.76 per cent self-citations, and the author at serial number 1, with 10.08 citations per paper, had 7.72 per cent. Overall, the author at serial number 19 had the highest percentage of self-citations (53.85%), followed by the author at serial number 20 (36.36%) and the author at serial number 13 (31.37%). Only the author at serial number 16 had zero self-citations.


Table 2. Authorship Pattern in Cited Papers (Arranged by total papers)

It is apparent from previous studies that authors having strong academic (student-teacher) and social relations frequently cite each other, leading to a greater mutual exchange of citations (Cronin, 2005; White, 2001). These personal ties strengthen the reciprocal use of citations, sometimes legitimately, sometimes not. That student-teacher relationships result in more citations is evident from the cases of S. J. Haider with Khalid Mahmood (76 citations) and Kanwal Amin (25 citations), and Abdul Moid with S. J. Haider (11 citations). Farzana Shafique gives Khalid Mahmood 37 citations and Khalid Mahmood gives her 16. Similarly, Sajjad ur Rehman gives 19 citations to Abdus Sattar Chaudhry and 14 to Shaheen Majid (Table 4). We personally know that some faculty members and senior professionals tell their students or junior colleagues to add citations for some authors and exclude citations for others because of personal relations or professional rivalries. However, concrete examples cannot be given due to the sensitivity of the matter. The editorial board of a local LIS journal testifies to this practice (Sabzwari, 2015, p. 1).


Table 3. Research Impact of Authors (arranged by h-index score; numbers in brackets indicate ranking)

Based on our personal experience, a new phenomenon has emerged recently due to the pressures of academic requirements for publishing in impact factor journals and obtaining higher citations. Several local writers have commented on this issue. Rizwan and Saadullah (2009) coined the term ‘Impactomania’ for the quest for higher impact values. Anees and Iraj (2015) go a step further, calling the situation ‘impact factor, fad or fallacy.’ Another phenomenon of recent origin has sprung from academic and professional rivalries, not necessarily visible on the surface. We have personally observed that some writers intentionally do not cite the papers of certain other authors. The worst example is a case where an author cites another author twice in the text while giving only the name in one reference and the name and year in the other (Bhatti, 2008), thus making sure that the citation will not be picked up. The situation is further complicated by the addition of fake authors, as revealed by the chief editor of a Pakistani journal. He laments that “it is observed that some names in the article are dummy just to favor someone or the teacher/guide or a … supervisor …. Some authors asked us to put a name of LIS teacher just to please him or to help him in his promotion” (Sabzwari, 2015, p. 1). Such practices are known to the authors of this paper through personal knowledge and interviews with many junior writers. This phenomenon needs to be investigated further.

The data presented in the grid (Table 4) clearly show a self-citing circle (as described by Atanassov & Detcheva, 2014). Haroon Idrees, while citing 10 of the 21 writers in the grid, gives 51 (73.91%) of his 69 citations to only four authors, including 14 self-citations. Khalid Mahmood, while giving 329 citations to 20 of the 21 authors in the grid, provides 76 (23.1%) to S. J. Haider and 84 (25.5%) to himself. A high rate of citing, both self and each other, can be seen among Khalid Mahmood, S. J. Haider, Kanwal Amin, and Farzana Shafique. The impact of these practices becomes clear in the following paragraph.

A correlation matrix was produced using SPSS version 20 between the self-citations and various performance indices of the authors (Table 5). As shown in the table, self-citation established significant relationships with the h-index, g-index, hI-index, hI-norm, and e-index. The correlation coefficients ranged from .648 to .815, meaning that the association of self-citation with the various indices under study was strong. Thus, those authors who had more self-citations also had higher h-index, g-index, hI-index, hI-norm, and e-index values. The current results regarding the association between self-citations and performance indices confirm the finding reported in the earlier literature that authors' self-citations considerably increase their h-index (Bartneck & Kokkelmans, 2011; Couto et al., 2009; Minasny et al., 2013) and have a significant influence on the g-index (Schreiber, 2008). Self-citations must likewise affect the other derived forms of the h-index, i.e., the hI-index, hI-norm, and e-index. Therefore, the values reported for the five indices for the various authors (see Table 3) must be read in light of the self-citations displayed in Table 4.

Table 6 provides correlation statistics between the total cited papers, total received citations, and self-citations of the 21 authors. The correlations are positive, strong, and significant at p<0.01 among all three factors. The positive correlation between the number of cited papers and total citations means that the authors who had more cited papers received more citations. Similarly, the positive relationship between total cited papers and the number of self-citations indicates that an increasing number of cited papers also increased the number of self-citations, and vice versa. The association between total received citations and self-citations connotes that those authors who cited their own works frequently had higher numbers of total received citations, and vice versa. As a result, the authors' self-citation behaviour had a strong impact on their total number of received citations. This result agrees with the findings of Shah, Gul, and Gaur (2015), who found a strong and statistically significant positive correlation between authors' total citations and their frequency of self-citations. This practice would naturally affect the values of the indices used in this study.
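The coefficients reported in Tables 5 and 6 are ordinary Pearson product-moment correlations, which SPSS computes from paired observations. A minimal sketch of the computation, using made-up illustrative data rather than the study's actual dataset:

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: self-citation counts vs. total citations for five authors.
self_cites = [2, 5, 9, 14, 20]
total_cites = [30, 60, 110, 150, 220]
```

A coefficient near +1, like the .648 to .815 range reported above, indicates a strong positive linear association; on the hypothetical data here, `pearson_r` returns a value above 0.9.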



The purpose of this study was to identify the individual impact of Pakistani LIS researchers. Therefore, the study was not limited to any specific period, and the omission of the time span of the researchers' activities is its main limitation. Another limitation is that citation-based characteristics change rapidly on the web. There is also a possibility of the presence of some honorary or ghost authors in some cited publications (Sabzwari, 2015, p. 1).


Table 4. Anwar-Jan Grid of Intra-Group Citing (each other and self) (Top row indicates authors citing and extreme left column presents authors cited)


Table 5. Correlations Matrix between Self-Citations and Various Impact Indices


Table 6. Correlations between Total Cited Papers, Total Received Citations, and Self-Citations

Although a number of recognized limitations have made individual-level bibliometric studies controversial (Costas & Bordons, 2005; Sandström & Sandström, 2009), such studies are still considered a useful tool for gauging the research impact, research performance, and collaborative tendencies of individual researchers (Costas, 2010; Lee & Bozeman, 2005). This study highlighted the impact of Pakistani LIS authors with the help of several bibliometric indicators. These indicators suggest that the research output of these authors has had a good impact on the published literature. However, it can be concluded from the results that the number of authors with five or more cited papers (n=21) and the contribution of professional librarians are less than expected. The authors who worked in collaboration produced more papers and received more citations.

The phenomenon of citing each other clearly suggests that authors with close social and personal contacts frequently cited each other's work. In particular, some of the citing and cited authors were bound in student-teacher relationships, resulting in students frequently citing their teachers. The significant positive relationship of self-citations with the different impact indices makes the rankings and actual scores doubtful. It is also concluded that those authors who were active on social media and marketed their publications via social networks achieved higher visibility and received more citations within a short time of their professional service.

There is a need to repeat this study with the same objectives while controlling for the influence of self-citations on the various impact indices. Another study could identify the citations received at the national and international levels, including non-web materials. This would more fully disclose the influence of LIS authors on the field. The results of some papers which are either weak or have dubious procedures are nevertheless used by researchers. The effect of their use is negative, but since they receive citations they gain positive impact value. There is a need to look into this phenomenon and possibly adopt a convention whereby researchers indicate at the end of each reference whether its use was ‘positive’ or ‘negative.’ This practice would balance the impact results.




  1. Adam, D. (2002). The counting house. Nature, 415, 726-729.
  2. Aksnes, D. W. (2003). A macro-study of self-citations. Scientometrics, 56(2), 235-246.
  3. Ali, M. Y., & Richardson, J. (2016). Research publishing by library and information science scholars in Pakistan: A bibliometric analysis. Journal of Information Science Theory & Practice, 4(1), 6-20.
  4. Anees, M. A., & Iraj, M. (2015). Impact factor, fad or fallacy? The Nation, July 24, 2015, retrieved from
  5. Anwar, M. A., & Saeed, H. (1999). Pakistani librarians as authors; A bibliometric study of citations in LISA-PLUS. Asian Libraries, 8(2), 39-46.
  6. Atanassov, V., & Detcheva, E. (2014). Self-citation effect on scientometric indexes. International Journal of Information Models and Analysis, 3(1), 68-83.
  7. Bar-Ilan, J. (2008). Which h-index? - A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257-271.
  8. Bartneck, C., & Kokkelmans, S. (2011). Detecting h-index manipulation through self-citation analysis. Scientometrics, 87, 85-98.
  9. Bashir, M. (2013). Bibliometric study of Pakistan’s research output and comparison with other selected countries of the world. Asian Journal of Science and Technology, 4(5), 1-7.
  10. Batista, P. D., Campiteli, M. G., Kinouchi, O., & Martinez, A. S. (2006). Is it possible to compare researchers with different scientific interests? Scientometrics, 68(1), 179-189.
  11. Bhatti, R. (2008). Information needs of students - Islamic University Library, Bahawalpur. Pakistan Library & Information Science Journal, 39(3), 6-21.
  12. Borgman, C. L., & Furner, J. (2002). Scholarly communication and bibliometrics. In B. Cronin (Ed.), Annual review of information science and technology (pp. 3-72). Medford, NJ: Information Today.
  13. Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8, 93-102.
  14. Costas, R., & Bordons, M. (2005). Bibliometric indicators at the micro-level: Some results in the area of natural resources at the Spanish CSIC. Research Evaluation, 14(2), 110-120.
  15. Costas, R., Van Leeuwen, T. N., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effect of age on productivity and impact. Journal of the American Society for Information Science and Technology, 61(8), 1564-1581.
  16. Couto, F. M., Pesquita, C., Grego, T., & Verissimo, P. (2009). Handling self-citations using Google Scholar. Cybermetrics, 13(1), 1-7.
  17. Cronin, B. (2005). Warm bodies, cold facts: The embodiment and emplacement of knowledge claims. In Proceedings of the 10th International Conference of the International Society for Scientometrics and Informetrics, 2005, Stockholm (pp. 1-12). Karolinska University Press.
  18. Cronin, B., & Meho, L. I. (2006). Using the h-index to rank influential information scientists. Journal of the American Society for Information Science and Technology, 57(9), 1275-1278.
  19. Egghe, L. (2006). Theory and practice of the g-index. Scientometrics, 69(1), 131-152.
  20. Harzing, A.-W. K. (2013). A preliminary test of Google Scholar as a source for citation data: A longitudinal study of Nobel Prize winners. Scientometrics, 93(3), 1057-1075.
  21. Harzing, A.-W. K. (2014). A longitudinal study of Google Scholar coverage between 2012 and 2013. Scientometrics, 98(1), 565-575.
  22. Harzing, A.-W. K. (2007). Publish or Perish. Retrieved from
  23. Harzing, A.-W. K., & Alakangas, S. (2016). Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison. Scientometrics, 106(2), 787-804.
  24. Harzing, A.-W. K., & Wal, R. van der (2009). A Google Scholar h-index for journals: An alternative metric to measure journal impact in economics and business. Journal of the American Society for Information Science and Technology, 60(1), 41-46.
  25. Harzing, A.-W. K., & Wal, R. van der (2008). Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics, 8, 61-73.
  26. Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569-16572.
  27. Hussain, A., & Fatima, N. (2010). A bibliometric analysis of the Chinese librarianship: An international electronic journal (2006-2010). Chinese Librarianship: An International Electronic Journal, 31(2), 1-14.
  28. Hyland, K. (2003). Self-citation and self-reference: Credibility and promotion in academic publication. Journal of the American Society for Information Science and Technology, 54, 251-259.
  29. Jacso, P. (2009). Calculating the h-index and other bibliometric and scientometric indicators from Google scholar with the Publish or Perish software. Online Information Review, 33(6), 1189-1200.
  30. Jan, S. U., & Anwar, M. A. (2013). Impact of Pakistani authors in the Google world: A study of library and information science faculty. Library Philosophy and Practice (e-journal), Paper no. 980.
  31. Khey, D. N., Jennings, W. G., Higgins, G. E., Schoepfer, A., & Langton, L. (2011). Re-ranking the top female academic “stars” in criminology and criminal justice using an alternative method: A research note. Journal of Criminal Justice Education, 22(1), 118-129.
  32. Khurshid, Z. (2013). Contribution of Pakistani authors to foreign library and information science journals: An evaluative study. Aslib Proceedings, 65(4), 441-460.
  33. Lee, S., & Bozeman, B. (2005). The impact of research collaboration on scientific productivity. Social Studies of Science, 35(5), 673-702.
  34. Long, H., Boggess, L. N., & Jennings, W. G. (2011). Re-assessing publication productivity among academic “stars” in criminology and criminal justice. Journal of Criminal Justice Education, 22(1), 102-117.
  35. Macri, J., & Sinha, D. (2006). Rankings methodology for international comparisons of institutions and individuals: An application to economics in Australia and New Zealand. Journal of Economic Surveys, 20(1), 111-156.
  36. Mahmood, K., & Rehman, S. U. (2009). Contributions of Dr. Anis Khurshid to library literature: A bibliometric study. Pakistan Journal of Library & Information Science, 10, 43-56.
  37. McCallum, M. L. (2010). Characterizing author citation ratings of herpetologists using Harzing’s Publish or Perish. Herpetology Notes, 3, 239-245.
  38. McKercher, B. (2008). A citation analysis of tourism scholars. Tourism Management, 29(6), 1226-1232.
  39. Meho, L. I., & Yang, K. (2007). Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology, 58(13), 2105-2125.
  40. Minasny, B., Hartemink, A. E., McBratney, A., & Jang, H.-J. (2013). Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar. PeerJ, 1, e183. doi:10.7717/peerj.183
  41. Qayyum, M., & Naseer, M. M. (2013). Bio-bibliometric study of Dr. Khalid Mahmood's contributions to LIS field in Pakistan. Library Philosophy and Practice, Paper no. 900.
  42. Razzaque, M. A., & Wilkinson, I. F. (2007). Research performance of senior level marketing academics in the Australian universities: An exploratory study based on citation analysis. Paper presented at the Australia New Zealand Marketing Academy Conference (ANZMAC), University of Otago, New Zealand, December 1-3.
  43. Repanovici, A. (2011). Measuring the visibility of the university’s scientific production through scientometric methods: an exploratory study at the Transilvania University of Brasov, Romania. Performance Measurement and Metrics, 12(2), 106-117.
  44. Rizwan, M. M., & Saadullah, M. (2009). Impactomania. Journal of the Pakistan Medical Association, 59(6), 424.
  45. Saad, G. (2006). Exploring the h-index at the author and journal levels using bibliometric data of productive consumer scholars and business-related journals respectively. Scientometrics, 69(1), 117-120.
  46. Sabzwari, G. A. (2015). Intellectual honesty and professional integrity (editorial). Pakistan Library & Information Science Journal, 46(2), 1-2.
  47. Sandstrom, U., & Sandstrom, E. (2009). Meeting the micro-level challenges: Bibliometrics at the individual level. In 12th International Conference on Scientometrics and Informetrics (pp. 845-56). Rio de Janeiro: BIEREME/PAHO/WHO.
  48. Schreiber, M. (2008). The influence of self-citation corrections on Egghe’s g index. Scientometrics, 76(1), 187-200.
  49. Shadbolt, N., Brody, T., & Carr, L. H. S. (2006). The open research web: A preview of the optimal and the inevitable. In N. Jacobs (Ed.), Open access: Key strategic, technical and economic aspects (pp. 195-208). Oxford: Chandos.
  50. Shah, T. A., Gul, S., & Gaur, R. C. (2015). Authors self-citation behaviour in the field of Library and Information Science. Aslib Journal of Information Management, 67(4), 458-468.
  51. Sidiropoulos, A., Katsaros, D., & Manolopoulos, Y. (2007). Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics, 72(2), 253-280.
  52. Sife, A. S., & Lwoga, E. T. (2014). Publication productivity and scholarly impact of academic librarians in Tanzania: A scientometric analysis. New Library World, 115(11/12), 527-541.
  53. Singh, N. (2009). Influence of information technology in growth and publication of Indian LIS literature. Libri, 59(1), 55-67.
  54. Snyder, H., & Bonzi, S. (1998). Patterns of self-citation across disciplines. Journal of Information Science, 24, 431-435.
  55. Swain, D. K. (2011). Library Philosophy and Practice, 2004-2009: A scientometric appraisal. Library Philosophy and Practice, Paper no. 556, 1-18.
  56. Weingart, P. (2005). Impact of bibliometrics upon the science system: inadvertent consequences. Scientometrics, 62(1), 117-131.
  57. White, H. D. (2001). Authors as citers over time. Journal of the American Society for Information Science and Technology, 52(2), 87-108.
  58. Yazit, N., & Zainab, A. N. (2007). Publication productivity of Malaysian authors and institutions in LIS. Malaysian Journal of Library and Information Science, 12(2), 35-55.
  59. Zhang, C.-T. (2009). The e-index, complementing the h-index for excess citations. PLoS One, 4(5), e5429.