
InCites™


Presentation Transcript


  1. InCites™ rachel.mangan@thomsonreuters.com http://researchanalytics.thomsonreuters.com/incites/

  2. Workshop Objectives: After this workshop you will be able to:
  • Understand the basic components of InCites (slide 3)
  • Navigate all modules: Research Performance Profiles, Global Comparisons and Institutional Profiles (RPP = slide 23, GC = slide 52, IP = slide 71)
  • Understand the normalised indicators and how to use them (slide 14)
  • Perform an analysis of authors/departments/subject areas/collaborations using standard and normalised indicators (slide 31)
  • Understand the preset reports and what they inform on
  • Create custom reports (slide 44)
  • Save and share reports with colleagues (slide 46)
  • Benchmark the performance of institutions/countries against global averages in multiple subject schemas (slides 53-70)
  • Compare the performance of an institution against peer institutions across a wide range of activities and subject focus (slides 73-83)
  • Understand the use of citation data for the 2014 Research Excellence Framework and how InCites may be used to inform universities on submissions (slides 84-91)

  3. Objective: Understand the basic components of InCites
  • InCites is a customised, citation-based research evaluation tool on the web that enables you to analyse institutional productivity and benchmark your output against peers worldwide.
  • All bibliographic and citation data are drawn from the Web of Science.
  • The InCites platform offers three modules:
  • Research Performance Profiles (RPP)
  • Global Comparisons (GC)
  • Institutional Profiles (IP)

  4. InCites components
  Research Performance Profiles:
  • Bibliometrics driven
  • Internal view - data for your institution's published work
  • Granular - detail at the paper, author, discipline level, and more
  • Collaboration data
  • Customized - based on customer requirements; optional author-based data sets
  • Current - updated quarterly
  Global Comparisons:
  • Bibliometrics driven
  • Comparative across institutions and countries
  • Top-level - detail summarized at the institution and country level for various disciplines
  • Standardized - annual production, uniform data cut-off for all institutions
  • Diverse - a wide range of metrics place the influence of published research into multiple perspectives for an institution
  Institutional Profiles:
  • 360° view of the world's leading research institutions
  • Academic Reputation Survey, institution-submitted data and bibliometrics
  • A huge undertaking, providing a unique set of data, objective and subjective, and tools through which to examine and compare institutional resources, influence, and reputation

  5. Research Performance Profiles
  A custom-built dataset created by Thomson Reuters to match customer specifications. Datasets can be compiled using the following search criteria:
  • Address (extracting WOS records that contain at least one occurrence of an address, e.g. Univ Manchester and variants, as identified by the customer)
  • Author (extracting WOS records that contain specific authors or papers as identified by the customer)
  • Other datasets are available for topic and journal
  Updated quarterly from date of issue. Customers can work with the InCites team to request changes for better unification to improve future updates. InCites can include source articles published between 1981 and 2012 as indexed in the Web of Science. Customers can extract the data to populate their CRIS systems.

  6. Research Performance Profiles
  RPP can be used to inform on:
  • The overall performance of research at an institution
  • The performance of authors
  • The performance of departments
  • The performance of collaborations
  • The performance of areas of research
  • The performance of individual papers
  • The performance of papers in specific journals
  • The impact/influence of published research
  • The performance of papers funded by a funding agency

  7. RPP - Web of Science data
  • All document types that match the customer specification are included (articles, reviews, editorials, letters, etc.)
  • All authors indexed: last name + initials, variants included, name as published. Full author names display in the Author Ranking report of an author-based dataset.
  • All addresses indexed: author affiliation as published; the main organisation (e.g. Univ Manchester) is displayed in RPP.
  • Funding information from 2008 onwards: funding agency as published, grant numbers from the Funding Acknowledgement.
  • Web of Science Subject Area applied at journal level: 249 WOS/JCR subject categories. Source records inherit all journal level categories (an article published in the Journal of Dental Research will inherit the categories Dentistry, Oral Surgery & Medicine). Multidisciplinary journals are categorised as 'Multidisciplinary Sciences'; for some multidisciplinary journals (Science, Nature, British Medical Journal, etc.) articles are reassigned a new WOS category based on analysis of citing/cited relationships.
  • Journal Impact Factor from the 2010 JCR
  • Author Keywords and KeyWords Plus

  8. RPP - Web of Science data [screenshot of a Web of Science record with numbered callouts corresponding to the data elements listed on slide 7]

  9. RPP Key Metrics
  These metrics enable the comparison of an article's impact to global averages; a sketch of the calculations follows below.
  • Journal Expected Citation Rate: average citations for records of the same type, from the same journal, published in the same year
  • Category Expected Citation Rate: average citations for records of the same type, from the same category, published in the same year
  • Percentile in Field: citation performance relative to records of the same document type, from the same category, published in the same year. The most cited paper is awarded the lowest percentile (0%) and the least cited or uncited papers the highest percentile (100%)
  • H Index: the largest number h such that h papers in the set have each been cited at least h times
  • Journal Actual/Journal Expected: ratio of the actual citation count of a paper to the expected count for papers published in the same journal, year and document type
  • Category Actual/Category Expected: ratio of the actual citation count of a paper to the expected count for papers from the same category, year and document type
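To make these definitions concrete, here is a minimal Python sketch of the expected-rate baselines and the H Index. The in-memory paper table, its field layout and the citation counts are invented for illustration; this is not the InCites export format or API.

```python
from collections import defaultdict

# (journal, category, year, doc_type, times_cited) for each paper -- toy data
papers = [
    ("Nature Materials", "Materials Science", 2007, "article", 4148),
    ("Nature Materials", "Materials Science", 2007, "article", 120),
    ("Nature Materials", "Materials Science", 2007, "article", 95),
]

def expected_rate(papers, key_fields):
    """Average citations for records sharing the same key (e.g. same
    journal, year and document type): the 'expected' baseline."""
    groups = defaultdict(list)
    for journal, category, year, doc_type, cites in papers:
        record = {"journal": journal, "category": category,
                  "year": year, "doc_type": doc_type}
        key = tuple(record[f] for f in key_fields)
        groups[key].append(cites)
    return {k: sum(v) / len(v) for k, v in groups.items()}

journal_expected = expected_rate(papers, ("journal", "year", "doc_type"))
category_expected = expected_rate(papers, ("category", "year", "doc_type"))

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

print(journal_expected)           # {('Nature Materials', 2007, 'article'): 1454.33...}
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The actual/expected ratios on the slide are then simply each paper's citation count divided by the matching baseline value.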

  10. Global Comparisons (GC)
  • Global Comparisons contains aggregated comparative statistics for institutions, countries and fields of research
  • Built by Thomson Reuters and common to all customers: all customers see the same data in GC
  • All data drawn from the Web of Science (SCI, SSCI)
  • File depth from 1981 to 2010, updated annually
  • Data for articles, reviews and research notes
  • Use Institutional Comparisons to compare the performance of an institution or groups of institutions overall, across fields or within fields (institutional name variants are unified to the main organisation)
  • Use National Comparisons to compare the performance of more than 180 countries and 9 geopolitical regions overall, across fields or within fields
  • Multiple subject schemas: WOS (249 subject categories), Essential Science Indicators (22 broad categories), regional categories (UK, Australia, Brazil and China) and OECD

  11. Global Comparison Key Metrics
  • Web of Science documents
  • Times Cited
  • Cites per document (average impact)
  • % Documents Cited (at least 1 citation)
  • Impact Relative to Subject Area (average cites of an institution in a subject area compared to the expected impact in the subject area)
  • Impact Relative to Institution (average cites of papers in a field compared to the average cites overall for the institution)
  • % Documents in Subject Area (market share)
  • % Documents in Institution
  • % Documents Cited Relative to Subject Area
  • % Documents Cited Relative to Institution
  • Aggregate Performance Indicator: this metric normalises for period, document type and subject area and is a useful indicator for comparing institutions of different age, size and subject focus.
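A small illustration of the two relative-impact ratios described above; all figures here are invented for the example.

```python
# Average cites/paper for one institution in one field (invented)
inst_avg_cites_in_field = 9.2
# World average cites/paper in that field, i.e. the expected impact (invented)
field_expected_impact = 6.4
# Average cites/paper for the institution across all its fields (invented)
inst_avg_cites_overall = 7.5

# > 1 means the institution beats the field average
impact_rel_subject = inst_avg_cites_in_field / field_expected_impact
# > 1 means this field beats the institution's own overall average
impact_rel_institution = inst_avg_cites_in_field / inst_avg_cites_overall

print(round(impact_rel_subject, 2), round(impact_rel_institution, 2))  # 1.44 1.23
```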

  12. Institutional Profiles
  Institutional Profiles is a dynamic web-based resource presenting portraits of more than 550 of the world's leading research institutions. Through rigorous collection, vetting, aggregation and normalization of both quantitative and qualitative data, the profiles present details on a wide array of indicators such as faculty size, reputation, funding, citation measures and more. Citation metrics come from the Web of Science, profile information from the institutions themselves, and reputational data from the Global Institutional Profiles Project. Data from this project is used by Times Higher Education (THE) to inform the World University Rankings.

  13. Key Features
  • Visualization tools facilitate instant comparisons of performance across a wide array of indicators and subjects: Research Footprint™, Trend Graph, Scatter Plot
  • The ability to create customized Peer Groups for continuous comparative tracking

  14. Objective: Understand the normalised indicators and how to use them
  'The number of times that papers are cited is not in itself an informative indicator; citation counts need to be benchmarked or normalised against similar research. In particular, citations accumulate over time, so the year of publication needs to be taken into account; citation patterns differ greatly in different disciplines, so the field of research needs to be taken into account; and citations to review papers tend to be higher than for articles, and this also needs to be taken into account.' Source: REF Pilot Study

  15. NORMALISATION
  It is necessary to normalise absolute citation counts for:
  • Document type (reviews are cited more than articles; some document types are cited less readily)
  • Journal where published
  • Year of publication (citations accumulate over time)
  • Category (there is a marked difference in citation activity between categories)
  Golden rule: compare like with like.
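The golden rule in code: a minimal sketch assuming a lookup table of like-for-like baseline rates (the baseline figures are invented). It shows why a raw count is uninformative: the same count can be far above expectation in one field and below it in another.

```python
# (category, year, doc_type) -> average cites; baseline values are invented
baselines = {
    ("Mathematics", 2007, "article"): 4.1,
    ("Molecular Biology", 2007, "article"): 28.6,
    ("Molecular Biology", 2007, "review"): 55.0,
}

def normalised_impact(times_cited, category, year, doc_type):
    """Actual/expected ratio against the like-for-like baseline."""
    return times_cited / baselines[(category, year, doc_type)]

# 20 citations is strong for a 2007 mathematics article...
print(round(normalised_impact(20, "Mathematics", 2007, "article"), 2))       # 4.88
# ...but below expectation for a 2007 molecular biology review.
print(round(normalised_impact(20, "Molecular Biology", 2007, "review"), 2))  # 0.36
```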

  16. Is this a high citation count? • This paper has been cited 4148 times. • How does this citation count compare to the expected citation count of other articles published in the same journal, in the same year? • It is necessary to normalise for: • Journal = Nature Materials • Year = 2007 • Document type = article

  17. Create a benchmark - the expected citations. Search for papers that match the criteria, then run the Citation Report on the results page.

  18. Create a benchmark - the expected citations
  Articles published in Nature Materials in 2007 have been cited on average 137.75 times. This is the Expected Count. We compare the total citations received by a paper to what is expected: 4148 (Journal Actual) / 137.75 (Journal Expected) = 30.11. The paper has been cited at 30.11 times the expected rate. We call this ratio Journal Actual/Journal Expected.
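The slide's arithmetic, checked with the values given above:

```python
# Journal Actual / Journal Expected, using the slide's figures
journal_actual, journal_expected = 4148, 137.75
print(round(journal_actual / journal_expected, 2))  # 30.11
```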

  19. Percentile in Field. How many papers in the dataset are in the top 1%, 5% or 10% in their respective fields? This is an example of the citation frequency distribution of a set of papers in a given category, database year and document type. The papers are ordered from uncited/least cited on the left to the highest cited papers in the set on the right, and each paper is assigned a percentile within the set. In any given set there are always many low-cited or uncited papers (bottom 100%) and always few highly cited papers (top 1%). Only document types article, note, and review are used to determine the percentile distribution, and only those same article types receive a percentile value. If a journal is classified into more than one subject area, the percentile is based on the subject area in which the paper performs the best, i.e. the lowest value.
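A minimal sketch of assigning Percentile in Field under the convention above (most cited = 0%, uncited = 100%). The tie-handling rule used here, counting only papers cited strictly more, is an assumption; InCites' production rule may differ in detail.

```python
def percentile_in_field(paper_cites, field_cites):
    """Share of papers in the like-for-like field set cited MORE than
    this paper; lower is better."""
    higher = sum(1 for c in field_cites if c > paper_cites)
    return 100.0 * higher / len(field_cites)

# Invented citation counts for one (category, year, doc type) set
field = [0, 0, 1, 2, 3, 5, 8, 13, 40, 250]
print(percentile_in_field(250, field))  # 0.0  -- the top paper in the set
print(percentile_in_field(0, field))    # 80.0 -- eight papers are cited more

# If a journal sits in two subject areas, keep the better (lower) percentile:
# 35.0 is an invented value for the paper's percentile in the second area
print(min(percentile_in_field(8, field), 35.0))  # 30.0
```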

  20. No All-Purpose Indicator. This is a list of a number of different purposes a university might have for evaluating its research performance. Each purpose calls for particular kinds of information. Identify the question the results will help to answer and collect the data accordingly.

  21. InCites Access • http://incites.isiknowledge.com • Enter username and password, or use IP authentication

  22. InCites Start Page. These are the modules. Click on 'Get Started' to open a module.

  23. Objective: Navigate the two principal modules: 1. Research Performance Profiles
  Create a custom report to analyse a subset of papers, or run a preset report on the whole dataset.
  • RPP is custom built for each institution
  • Article level statistics
  • Aggregations across the whole dataset, or create custom subsets

  24. Executive Summary - an overall synopsis
  • 107,781 source papers
  • 1979-2011 timespan
  • 949,293 citing papers
  • Green bar = papers published per year (scale on left side)
  • Blue bar = citations received by papers published in that year (scale on right side)
  • Tables highlight frequently occurring authors, subject areas and most cited authors

  25. Source Article Listing - paper level metrics
  Article citation data and normalised metrics alongside article bibliographic information. Click on an article title to navigate to the record in Web of Science. Order the papers by the metrics available in the drop down menu: Times Cited, Percentile in Field, 2nd Generation Citations.

  26. Source Article Listing Key Metrics - for individual paper evaluation

  27. Summary Metrics - a dashboard of performance indicators
  Citation data and normalised metrics which give an overview of the overall performance of the papers in the dataset.
  Percentile Graph: for each percentile range, the "expected" share of papers (articles, reviews and notes) is equal to that same percentile. We would expect 5% of this institution's papers to rank in the 5th percentile; however, 6.79% of them do. 6.79% - 5% = 1.79%, so the share of papers this institution has placed in the top 5% of all papers published exceeds expectation by 1.79 percentage points. This 1.79% is what is presented on the graph, in green because it exceeds the expected value; below-expected results are presented in red.
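The graph's excess-over-expected calculation, using the slide's figures:

```python
# Actual share of the institution's papers in the top-5% band vs the
# expected share (which equals the band width, 5%)
actual_share, expected_share = 6.79, 5.0
excess = actual_share - expected_share
print(f"{excess:+.2f} pp")  # +1.79 pp -> plotted in green (above expectation)
```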

  28. Summary Metrics Key Indicators (for an author, institution, department, etc.)

  29. Funding Agency Listing. Click on the WOS document column to view the papers funded by the agency. Order the funding agencies by the indicators in the drop down menu.

  30. Article Type Listing. Use the Article Type Listing to examine the weighting of each document type in the dataset and differences in performance/impact between the document types.

  31. Objective: Perform analysis of authors/collaborations/subject areas using citation data and normalised metrics

  32. Author Ranking Report
  • Order authors using the citation and normalised metrics in the menu.
  • It may be necessary to establish thresholds to focus on authors who meet a minimum parameter, such as papers published or citations received; see the filtering sketch below.
  • Create an 'Author Ranking Report' in Custom Reports and establish the thresholds required.
  Click on any data value to view the Author Profile Report.
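A sketch of the thresholding idea applied to an invented author table; the column layout and the threshold values are illustrative choices, not the InCites export format.

```python
# (name, papers published, citations received) -- invented rows
authors = [
    ("Smith J", 42, 910),
    ("Jones A", 3, 15),
    ("Brown K", 18, 260),
]

MIN_PAPERS, MIN_CITES = 10, 100  # thresholds chosen by the analyst

# Keep authors above both thresholds, then rank by cites per paper
ranked = sorted(
    (a for a in authors if a[1] >= MIN_PAPERS and a[2] >= MIN_CITES),
    key=lambda a: a[2] / a[1],
    reverse=True,
)
for name, papers, cites in ranked:
    print(name, papers, cites, round(cites / papers, 2))
# Smith J 42 910 21.67
# Brown K 18 260 14.44   (Jones A is filtered out)
```

Without the thresholds, an author with one highly cited paper could dominate a per-paper ranking; the minimum-papers cut keeps the comparison meaningful.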

  33. Author Profile Report
  A profile of an author's performance including: collaborations, subject focus, publication activity, citation impact.

  34. Author Ranking Report

  35. Author Ranking Report for an Author Dataset
  • Full author names
  • Only authors who have been identified by the customer appear in this report
  • Less contamination from co-authors at other institutions than is seen in an address dataset

  36. Author Ranking Key Metrics

  37. Time Series and Trend Report
  • Total citations received by papers published in an individual year, e.g. papers published in 1981 have received 23,789 citations. Raw data in the table below.
  • Papers published per year: 1981 = 1,833 documents. Raw data in the table below.
  • Average citations to papers published in an individual year: papers published in 1981 have been cited an average of 12.98 times. Raw data in the table below. Use this indicator to identify the year(s) in which the research had the highest average impact; a worked check follows below.
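The 1981 figures check out, and the same aggregation generalises to any year; the example rows in the second half are invented.

```python
# The slide's 1981 figures: 23,789 citations over 1,833 papers
print(round(23789 / 1833, 2))  # 12.98 average cites per 1981 paper

from collections import defaultdict

# (publication year, times cited) pairs would normally come from the export
papers = [(1981, 15), (1981, 4), (1982, 9)]  # invented
totals, counts = defaultdict(int), defaultdict(int)
for year, cites in papers:
    totals[year] += cites
    counts[year] += 1
for year in sorted(totals):
    print(year, counts[year], totals[year], round(totals[year] / counts[year], 2))
```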

  38. Collaborating Institutions Report
  • Order the collaborations using the indicators in the menu.
  • The Collaborating Institutions report is extremely important not only in identifying the most frequent collaborating institutions, but also those collaborations producing the most influential research. In practical terms, one can identify the collaborations that produce the most return on investment; sorting by Category Actual/Expected Cites is an easy way to do this.
  • Customise this report to focus on collaborations that meet a minimum threshold.

  39. Collaborating Countries Report • Order the country level collaborations using the indicators in the menu. • Customise this report to focus on collaborations that meet a minimum threshold.

  40. Collaboration Reports Key Metrics

  41. Subject Area Ranking Report • Order the subject areas using the indicators in the menu. • Use this report to determine the intensity of publication output for each subject area and compare the performance of papers across disciplines.

  42. Journal Ranking Report • Order the journals using the indicators in the menu. • Use this report to identify the journals in which the source papers are published and compare the performance of papers in these journals using the standard and normalised metrics.

  43. Impact and Citation Ranking Reports
  • 949,293 citing papers in the dataset
  • Examine the citing papers to determine: who is influenced (authors, institutions), where the influence is (countries), and what is influenced (fields, journals and article types)

  44. Objective: Create Custom Reports
  1. Specify a report type from the menu
  2. Select the metrics to be included in the report
  3. Set the time period
  4. Use the delimiters to create a custom dataset
  5. Preview the papers that match the parameters specified, run the report, or save the selections

  45. Create Custom Reports - Preview Documents. Use Refine Document Collection to refine your custom dataset, and save your refined collection to 'Folders'.

  46. Objective: Save and share reports with colleagues

  47. Folders
  • My Saved Reports - save reports you generate
  • My Saved Custom Report Selections - save selections for the reports you frequently run
  • My Saved Document Collections - save collections (subsets) of the documents
  • Shared Reports, Shared Custom Report Selections and Shared Document Collections

  48. Save Selections. Provide a title for your saved selection and save it to 'My Folders'.

  49. Open a Custom Report. Click on the title of a report to open it; you can also create a folder, share the report, or delete it.

  50. Shared Reports. Click on the title of any report in the 'Shared Reports' folder to open it.
