Searching for Elusive Arctic Dataset Citations

Data Fellowship Project 2022

Author: Althea Marks

Published: December 5, 2022

1 Purpose

Run ADC DOIs through scythe and compare the results to known DataONE metrics citations. Known ADC citations have mixed origins, including DataCite, previous scythe runs, and manual additions via the ADC UI.

2 Questions

  1. Does the addition of the xDD digital library to the scythe package improve the quality and scope of citations in the ADC? Does increasing the number of sources we search result in more complete coverage (quality)?

    • Overlap in citations among sources

    • Inspired by species rarefaction curves - work toward a point where we can estimate the actual amount of citation out there. Dataset citations may be rare enough that the technique is not applicable (see the sketch at the end of this section).

    The calculation of species richness for a given number of samples is based on the rarefaction curve. The rarefaction curve is a plot of the number of species against the number of samples. This curve is created by randomly re-sampling the pool of N samples several times and then plotting the average number of species found on each sample. Generally, it initially grows rapidly (as the most common species are found) and then slightly flattens (as the rarest species remain to be sampled). (source)

    Would this be sampling the entirety of ADC DOIs?

  2. Does the prevalence of data citations differ among disciplines (environmental vs. social science)?

  3. The total number of citations is extremely useful. Ground-truth analysis: for a small number of datasets, manually search through the literature for citations.

  4. Do usage metrics (downloads and views) correlate well with citation metrics?
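A minimal rarefaction-style sketch (an illustration only, not part of the pipeline; it assumes the adc_all_dois vector and scythe_cit results built later in this document):

Code
# resample ADC DOIs at increasing sample sizes and count how many distinct
# cited datasets appear, averaged over replicates - a rarefaction-style curve
set.seed(42)
sample_sizes <- seq(100, length(adc_all_dois), by = 500)
mean_cited <- sapply(sample_sizes, function(n) {
  mean(replicate(100, {
    dois <- sample(adc_all_dois, n)
    length(unique(scythe_cit$dataset_id[scythe_cit$dataset_id %in% dois]))
  }))
})
plot(sample_sizes, mean_cited, type = "b",
     xlab = "Number of ADC DOIs sampled",
     ylab = "Mean number of distinct cited datasets")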

3 Methods Overview

  • Gather existing/known ADC dataset citations picked up by the automated DataONE metrics API

  • Get a list of all ADC dataset DOIs

  • Run all ADC dataset DOIs through scythe libraries

  • Review HTTP errors and rerun

  • Calculate citation source overlap

  • Compare citations from scythe to DataONE metrics

4 R Setup

Code
# set date here. Used throughout data collection, saving, and analysis. YYYY-MM-DD
#date <- "2022-07-14"
date <- "2022-11-03"

# vector of APIs used in analysis
source_list <- c("scopus", "springer", "plos", "xdd")

# ADC color palette
adc_color <- c("#19B36A", "#B5E1E7", "#1B897E", "#7AFDB1", "#1b897E", "#1D254E")

# load libraries
source(file.path("./R/load_pkgs.R"))

# create directories and file paths 
source(file.path("./R/analysis_paths.R"))

# functions for data collection and analysis
source(file.path("./R/functions.R"))

5 Search For Citations

5.1 Current known ADC citations

Use the following request body in a GET call to the DataONE Metrics Service production endpoint: https://logproc-stage-ucsb-1.test.dataone.org/metrics (documentation: https://app.swaggerhub.com/apis/nenuji/data-metrics/1.0.0.3)

Code
{
  "metricsPage":{
    "total":0,
    "start":0,
    "count":0
  },
  "metrics":["citations"],
  "filterBy":[{
    "filterType":"repository",
    "values":["urn:node:ARCTIC"],
    "interpretAs":"list"
  },
  {
    "filterType":"month",
    "values":["01/01/2012",
              "05/24/2022"],
    "interpretAs":"range"
  }],
  "groupBy":["month"]
}

Example request:

Code
https://logproc-stage-ucsb-1.test.dataone.org/metrics?metricsRequest={%22metricsPage%22:{%22total%22:0,%22start%22:0,%22count%22:0},%22metrics%22:[%22citations%22],%22filterBy%22:[{%22filterType%22:%22repository%22,%22values%22:[%22urn:node:ARCTIC%22],%22interpretAs%22:%22list%22},{%22filterType%22:%22month%22,%22values%22:[%2201/01/2012%22,%2205/24/2022%22],%22interpretAs%22:%22range%22}],%22groupBy%22:[%22month%22]}
Code
# Run ADC API Get call, unnest target_id results to individual columns
dataone_cit <- metrics_citations(to = as.POSIXct(date)) # use analysis date to constrain search

dataone_cit <- tidyr::unnest(dataone_cit,
                          cols = c(target_id, source_id, source_url,
                                   link_publication_date, origin, title,
                                   publisher, journal, volume, page, year_of_publishing))

write_csv(dataone_cit, file.path(output_directory,
                                 paste0("dataone_metrics_cit_", date,".csv")))

5.2 Query SOLR

The DataONE metrics API can only provide data package DOIs that have citations; it cannot provide a comprehensive list of all data package DOIs contained within the ADC. To search through all the repository metadata we query the DataONE search index (an Apache Solr search engine). Solr is the same underlying mechanism that DataONE uses in its online search tool, and it supports complex logical query conditions.

Tip

Call dataone::getQueryEngineDescription(cn, "solr") to return a complete list of searchable Solr fields
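For example, a quick way to browse those fields (cn is created the same way in the next chunk):

Code
# sketch: fetch the Solr query engine description and inspect its structure
cn <- dataone::CNode("PROD")
qdesc <- dataone::getQueryEngineDescription(cn, "solr")
str(qdesc) # lists the searchable field names and their types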

5.2.1 Get all ADC DOIs

Code
# set coordinating node
cn <- dataone::CNode("PROD")

# point to specific member node
mn <- dataone::getMNode(cn, "urn:node:ARCTIC")

# set up Solr query parameters
queryParamList <- list(q="id:doi*", 
                       fl="id,title,dateUploaded,datasource",
                       start ="0",
                       rows = "100000") # set number to definitely exceed actual number
# use `q = "identifier:doi* AND (*:* NOT obsoletedBy:*)"` to only include current versions of data packages 
# DataOne aggregates citations across dataset versions

# send query to Solr, return results as dataframe
solr_adc_result <- dataone::query(mn, solrQuery=queryParamList, as="data.frame", parse=FALSE)

write.csv(solr_adc_result, file.path(output_directory, 
                                     paste0("solr_adc_", date, ".csv")))

5.2.2 Get all ADC discipline ontology classifications

The ADC created a research discipline ontology to classify datasets. The root of the ADC discipline semantic annotations can be browsed at https://bioportal.bioontology.org/ontologies/ADCAD/?p=classes&conceptid=root; Classes/ID is where to look for query specifics. Below is an example SOLR query that looks for two of those disciplines:

https://cn.dataone.org/cn/v2/query/solr/?q=sem_annotation:*ADCAD_00077+OR+sem_annotation:*ADCAD_00005&fl=identifier,formatId,sem_annotation
Note

Every single social science (SS) discipline ID will need to be listed in the query; Solr is not yet set up to query the umbrella SS ID.

Code
# Run second Solr query to pull semantic annotations for 2022_08_10 DOIs

# set up Solr query parameters
discQueryParamList <- list(q = "id:doi* AND (*:* NOT obsoletedBy:*)",
                       fl = "id,title,dateUploaded,datasource,sem_annotation",
                       start ="0",
                       rows = "100000")

# send query to Solr, return results as dataframe. parse = T returns list column, F returns chr value
solr_adc_sem <- dataone::query(mn, solrQuery=discQueryParamList, as="data.frame", parse=T)

# POSSIBLE BREAK POINT - read in url
# read in csv with coded discipline ontology
adc_ont <- read.csv("https://raw.githubusercontent.com/NCEAS/adc-disciplines/main/adc-disciplines.csv") %>% 
  # use ontology id to build id url - add required amount of 0s to create 5 digit suffix
    mutate(an_uri = paste0("https://purl.dataone.org/odo/ADCAD_", 
                           stringr::str_pad(id, 5, "left", pad = "0")))

solr_adc_sem$category <- purrr::map(solr_adc_sem$sem_annotation, function(x){
    t <- grep("ADCAD", x, value = TRUE)
    cats <- c()
    for (i in seq_along(t)){ # seq_along() avoids the 1:0 trap when no ADCAD annotations match
        z <- which(adc_ont$an_uri == t[i])
        cats[i] <- adc_ont$discipline[z]
    }
    return(cats)
})

# extract discipline categories from single column to populate new columns
disc_adc_wide <- solr_adc_sem %>% 
  unnest_wider(category, names_sep ="_") %>% 
  select(-sem_annotation, -datasource, -title) %>% 
  rename("dataset_id" = id)

write.csv(disc_adc_wide,
          file.path(output_directory, paste0("solr_adc_", date, "_disc.csv")),
          row.names = FALSE)

The SOLR query does not yet include a date search term that aligns with the date object; date is only used to name the .csv files when saving and reading them.

5.3 Run DOIs through scythe

Code
# read in saved SOLR results
solr_adc_result_csv <- read_csv(file.path(output_directory, 
                                          paste0("solr_adc_", date, ".csv")))
# create vector of all ADC DOIs from solr query `result`
adc_all_dois <- c(solr_adc_result_csv$id)

APIs can have request rate limits. These specific rates are often found in the API documentation or the API response headers. If request rate limits are exceeded, API queries will fail.

Code
# Scopus request Limits
key_scopus <- scythe::scythe_get_key("scopus")
url <- paste0("https://api.elsevier.com/content/search/scopus?query=ALL:",
  "10.18739/A2M32N95V",
  paste("&APIKey=", key_scopus, sep = ""))

headers <- curlGetHeaders(url)
# pull the rate-limit headers by name rather than by fixed position:
# "X-RateLimit-Limit:", "X-RateLimit-Remaining:", and "X-RateLimit-Reset:"
# (the reset time is in Unix epoch seconds, i.e., seconds elapsed since 1970-01-01 00:00 UTC)
grep("X-RateLimit", headers, value = TRUE)

# Springer request limits
# 300 calls/min and 5,000 calls/day
# not found in the response header; Springer emailed that the rates above were being exceeded

#key_spring <- scythe::scythe_get_key("springer")
#url_spring <- paste0("http://api.springernature.com/meta/v2/json?q=doi:10.1007/BF00627098&api_key=", key_spring)
#curlGetHeaders(url_spring)

Run each library search in a separate background job to keep the console available for other work. By default, job::job() imports the global environment into the background job.

Note

scythe::scythe_set_key() is a wrapper around the keyring package. An interactive password prompt is required to access the API keys stored in the keyring, which does not work within a background job environment; the keyring needs to be temporarily unlocked with keyring::keyring_unlock("scythe", "your password"). Replace "password" in the next code chunk with your actual keyring password.

Warning

Be careful not to save, commit, or push your personal keyring password.

Code
# Run each source/library search in a separate background job. A for loop returns partial results if an API query fails, which is better than losing all progress to a single error in a single vectorized call.

key <- "password"

# Set up empty results data.frames
citations_scopus <- data.frame()
citations_springer <- data.frame()
citations_plos <- data.frame()
citations_xdd <- data.frame()


######### Scopus
job::job({
  for (i in seq_along(adc_all_dois)) {
    # access API keys within background job environment
    keyring::keyring_unlock("scythe", key)
    # suppress errors and continue loop iteration
    result <- tryCatch(citation <- scythe::citation_search(adc_all_dois[i], "scopus"),
                       error = function(err) {
                         data.frame("article_id" = NA,
                                    "article_title" = NA,
                                    "dataset_id" = adc_all_dois[i],
                                    "source" = paste0("scopus ", as.character(err)))
                         }
                       )
    citations_scopus <- rbind(citations_scopus, result)
    write.csv(citations_scopus, path_scopus, row.names = F)
  }
}, title = paste0("scopus citation search ", Sys.time()))


######### PLOS
job::job({
  for (i in seq_along(adc_all_dois)) {
    # access API keys within background job environment
    keyring::keyring_unlock("scythe", key)
    # suppress errors and continue loop iteration
    result <- tryCatch(citation <- scythe::citation_search(adc_all_dois[i], "plos"),
                       error = function(err) {
                         data.frame("article_id" = NA,
                                    "article_title" = NA,
                                    "dataset_id" = adc_all_dois[i],
                                    "source" = paste0("plos", as.character(err)))
                         }
                       )
    citations_plos <- rbind(citations_plos, result)
    write.csv(citations_plos, path_plos, row.names = F)
    }
}, title = paste0("plos citation search ", Sys.time()))


########## XDD
job::job({
  for (i in seq_along(adc_all_dois)) {
    # access API keys within background job environment
    keyring::keyring_unlock("scythe", key)
    # suppress errors and continue loop iteration
    result <- tryCatch(citation <- scythe::citation_search(adc_all_dois[i], "xdd"),
                       error = function(err) {
                         data.frame("article_id" = NA,
                                    "article_title" = NA,
                                    "dataset_id" = adc_all_dois[i],
                                    "source" = paste0("xdd", as.character(err)))
                         }
                       )
    citations_xdd <- rbind(citations_xdd, result)
    write.csv(citations_xdd, path_xdd, row.names = F)
    }
}, title = paste0("xdd citation search ", Sys.time()))


########## Springer
# divide ADC corpus into chunks less than Springer's 5,000/day request limit
springer_limit <- 4995
num <- seq_along(adc_all_dois)
chunk_list <- split(adc_all_dois, ceiling(num/springer_limit))


job::job({
  for(chunk in seq_along(chunk_list)){
    # pause api query for > 24hrs between chunk runs
    if(chunk != 1){Sys.sleep(87000)}
    for (i in seq_along(chunk_list[[chunk]])){ 
      # access API keys within background job environment
      keyring::keyring_unlock("scythe", key)
      # suppress errors and continue loop iteration
      result <- tryCatch(citation <- scythe::citation_search(chunk_list[[chunk]][i], "springer"),
                         error = function(err) {
                           data.frame("article_id" = NA,
                                      "article_title" = NA,
                                      "dataset_id" = chunk_list[[chunk]][i],
                                      "source" = paste0("springer ", as.character(err)))
                         }
      )
      citations_springer <- rbind(citations_springer, result)
      #write.csv(citations_springer, path_springer, row.names = F)
    }
  }
}, title = paste0("springer citation search ", Sys.time())
)

Springer’s API query limits affected how we ran our search. We decided to break the list of ADC DOIs into chunks of fewer than 5,000 DOIs and run each chunk through the API, waiting 24 hours between the last query of one chunk and the start of the next. We could have changed the base scythe function citation_search_springer() to slow down enough to accommodate both request limits, but this would substantially slow down the function and make smaller DOI queries slow and cumbersome.
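A minimal sketch of that alternative (not the scythe implementation): a wrapper that spaces successive Springer requests to stay under the 300 calls/min limit.

Code
# sketch only: enforce >= 0.2 s between Springer queries (300 calls/min)
throttled_springer_search <- function(doi, min_interval = 60 / 300) {
  t0 <- Sys.time()
  result <- scythe::citation_search(doi, "springer")
  elapsed <- as.numeric(difftime(Sys.time(), t0, units = "secs"))
  if (elapsed < min_interval) Sys.sleep(min_interval - elapsed)
  result
}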

Code
######### Springer

# divide ADC corpus into chunks less than Springer's 5,000/day request limit
springer_limit <- 4995
length(adc_all_dois) / springer_limit
chunk_1 <- adc_all_dois[1:springer_limit]
chunk_2 <- adc_all_dois[(springer_limit+1):(springer_limit*2)]
chunk_3 <- adc_all_dois[((springer_limit*2)+1):length(adc_all_dois)]

# change "chunk_x" object to search next chunk of DOIs. Must wait 24 hrs from last request. 
doi_chunk = chunk_3

job::job({
  for (i in seq_along(doi_chunk)){ 
    # access API keys within background job environment
    keyring::keyring_unlock("scythe", key)
    # suppress errors and continue loop iteration
    result <- tryCatch(citation <- scythe::citation_search(doi_chunk[i], "springer"),
                       error = function(err) {
                         data.frame("article_id" = NA,
                                    "article_title" = NA,
                                    "dataset_id" = doi_chunk[i],
                                    "source" = paste0("springer ", as.character(err)))
                         }
                       )
    citations_springer <- rbind(citations_springer, result)
    write.csv(citations_springer, path_springer, row.names = F)
    }
}, title = paste0("springer citation search", Sys.time())
)

5.4 Dealing with errors

The tryCatch() calls in the search loops above record errors produced by any API request or scythe function. The corresponding DOIs are extracted and rerun through scythe a second time. When running the error DOIs back through Scopus we discovered two bugs in the scythe code. The first bug was fixed here. The second bug was a query return that did not have a DOI (conference proceedings).

Code
# Extract DOIs that error during scythe queries

# read in raw results .csv into a list of dataframes
results_list <- lapply(source_list, FUN = mk_result_list)
# assign source names to list elements
names(results_list) <- source_list

# pull dataframe rows that had API request errors
error_list <- lapply(results_list, FUN = did_this_error)

# run error DOIs back through scythe
error_query_results <- sapply(error_list, FUN = query_errors, source_list)

# write error re-run results to .csv
map2(error_query_results, source_list, write_error_results)
Note

Running a second round of API queries using error DOIs is semi-automated above. Future script users will likely need to adjust the above code chunk to combine 1st and 2nd run results for analysis.

Code
# This code was used during the '2022-07-08' scythe run. 
## Scopus
citations_error_scopus <- data.frame()
job::job({
  for (i in seq_along(doi_error_scopus)) {
    # access API keys within background job environment
    keyring::keyring_unlock("scythe", key)
    # suppress errors and continue loop iteration
    result <- tryCatch(citation <- scythe::citation_search(doi_error_scopus[i], "scopus"),
                       error = function(err) {
                         data.frame("article_id" = NA,
                                    "article_title" = NA,
                                    "dataset_id" = doi_error_scopus[i],
                                    "source" = paste0("scopus ", as.character(err)))
                         }
                       )
    citations_error_scopus <- rbind(citations_error_scopus, result)
  }
}, title = paste0("scopus error citation search ", Sys.time()))

# save search results from errored DOI
write.csv(citations_error_scopus,
          file.path(output_directory, paste0("scythe_", date, "_scopus_error.csv")),
          row.names = F)
# 2022-07-14 scopus errors were incorporated into cits_scopus at some point. Not reflected in this code script.

######### PLOS
citations_error_plos <- data.frame()
job::job({
  for (i in seq_along(doi_error_plos)) {
    # access API keys within background job environment
    keyring::keyring_unlock("scythe", key)
    # suppress errors and continue loop iteration
    result <- tryCatch(citation <- scythe::citation_search(doi_error_plos[i], "plos"),
                       error = function(err) {
                         data.frame("article_id" = NA,
                                    "article_title" = NA,
                                    "dataset_id" = doi_error_plos[i],
                                    "source" = paste0("plos", as.character(err)))
                         }
                       )
    citations_error_plos <- rbind(citations_error_plos, result)
    }
}, title = paste0("plos error citation search ", Sys.time()))
# empty dataframe return means no citations found and no HTTP errors

6 Analysis / Results

6.1 Does addition of xDD improve quality & scope of ADC dataset citations?

Does increasing the number of sources we are searching result in more complete coverage/quality?

Code
# read in saved scythe results for all sources; creates the `cits_<source>` objects
# reduces dependency on global environment objects - analysis can be picked up here instead of rerunning scythe. Error re-run results are added if detected.

for(i in source_list){
    path <- get(paste0("path_", i)) # fetch the path object by name; clearer than eval(parse())
    if(file.exists(path)){
      assign(paste0("cits_", i), 
             if(file.exists(paste0(path_error, i, "_err_res.csv"))){
               rbind(read_csv(file.path(path)), 
                     read_csv(file.path(paste0(path_error, i, "_err_res.csv"))))
             } else {read_csv(file.path(path))}
      )
    } else {
      print(paste0(i, " saved scythe results do not exist in output directory"))
    }
}

# read in saved combined results if already exist, create and save if not
if(file.exists(path_all)) {
  scythe_cit <- read_csv(path_all)
} else{
  scythe_cit <- rbind(cits_scopus, 
                      cits_springer,
                      cits_plos,
                      cits_xdd) %>%
    filter(!is.na(article_id)) # remove NA/error observations
           #grepl(dataset_id, pattern = "^10.18739.*")) # remove datasets not housed on the ADC
  write_csv(scythe_cit, path_all)
}
Code
# create mini dataframe to populate total citations in summary table
scythe_total <- tibble("source" = "Total",
                       "num_cit" = length(scythe_cit$dataset_id),
                       "num_datasets" = length(unique(scythe_cit$dataset_id)))
  
# summary table + cheater total row
scythe_sum <- scythe_cit %>% 
  group_by(source) %>% 
  summarise("num_cit" = length(source),
            "num_datasets" = length(unique(dataset_id))) %>% 
  rbind(scythe_total)

scythe_sum$source <- c("PLOS", "Scopus", "Springer", "xDD", "Total") # display names; rows are in alphabetical group order plus Total

knitr::kable(scythe_sum, 
             col.names = c("Source", "Number of Citations", "Number of Datasets"))
Table 1: Raw Results from Scythe Search of ADC DOIs

Source     Number of Citations   Number of Datasets
PLOS                        40                   36
Scopus                     706                  369
Springer                   191                   93
xDD                        327                  193
Total                     1264                  501

6.1.1 Do citation sources overlap in coverage?

We evaluated the redundancy in dataset citations found among sources by matching citations between source search results. A citation is defined by the unique combination of article_id and dataset_id. Percent overlap is the number of citations found in a source that were also found in a second source, divided by the total number of citations found within the first source.

Code
# summarize the sources that each citation is found in for table
overlap <- scythe_cit %>% 
  group_by(dataset_id, article_id) %>% 
  summarize(source_combination = paste(source, collapse = "&")) %>% 
  group_by(source_combination) %>% 
  summarize(n = n())

# Create euler diagram of overlap
# Color-blind friendly color palette
#show_col(viridis(30, option = "C"))
# viridis color palette
#overlap_color <- c("#AB2494FF", "#DE6065FF", "#FCA338FF", "#F0F921FF")
overlap_color <- c("#19B36A", "#B5E1E7", "#1B897E", "#7AFDB1")
ovrlp_vec <- setNames(overlap$n, as.character(overlap$source_combination))
fit <- euler(ovrlp_vec)
euler_fig <- plot(fit, 
     quantities = TRUE,
     fills = list(fill = overlap_color),
     labels = c("PLOS", "Scopus", "Springer", "xDD"))

euler_fig
ggsave(euler_fig, 
       filename = file.path(output_directory, paste0("scythe_", date, "overlap_fig.png")), 
       dpi = 600,
       scale = 1,
       units = "in",
       width = 6)

Figure 1: Citation Source Overlap: Number of citations found in multiple sources and number of citations found uniquely in only one source.

Table 2: Percent overlap between Scythe sources

Source     Total Citations   % in PLOS   % in Scopus   % in Springer   % in xDD
PLOS                    40       100.0          30.0             0.0        0.0
Scopus                 706         1.7         100.0            10.3       16.6
Springer               191         0.0          38.2           100.0        0.5
xDD                    327         0.0          36.1             0.3      100.0
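Table 2 can be reproduced with a short sketch like the following (assuming the scythe_cit data frame from above, with article_id, dataset_id, and source columns):

Code
# pairwise percent overlap: share of each source's citations also found in another source
cit_sets <- lapply(split(paste(scythe_cit$article_id, scythe_cit$dataset_id),
                         scythe_cit$source), unique)

pct_overlap <- sapply(cit_sets, function(other) {
  sapply(cit_sets, function(src) round(100 * mean(src %in% other), 1))
})
# rows = source searched, columns = the "% in" comparison source
pct_overlap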

Scopus found 503 unique citations not found in any other digital library; Springer found 117, PLOS 28, and xDD 208 unique citations. The total number of unique citations returned by scythe is 1060.

6.2 Do Dataset Citations Differ Among Research Disciplines?

The Arctic Data Center uses a semantic ontology to classify academic disciplines of datasets. Datasets can be labeled with up to 5 disciplines. This enables datasets to be more easily found with search terms. The ADC’s ontology can be found here: https://bioportal.bioontology.org/ontologies/ADCAD/?p=classes&conceptid=root.

Note

DOI (Digital Object Identifier) is one system of unique persistent identification (PID). Different PIDs may be used in different academic disciplines. For example, genetics/bioinformatic studies often use accession numbers from the GenBank repository to uniquely label DNA and protein sequences. This analysis is limited to citations specifically using DOIs; citations using dataset titles or other PIDs are not included.

Code
# Meld manual dataset discipline categorization with Solr categories
disc_adc <- read_csv(file.path(output_directory, 
                               paste0("solr_adc_", date, "_disc.csv"))) %>% 
  select(1,3:7) # remove dateUploaded column

# read in manual dataset discipline classifications. Remove extra columns
disc_manual <- read_csv(file.path(data_dir, "adc-discipline-2022-10-27.csv"))[1:6]
colnames(disc_manual)[1] <- "dataset_id"

# merge discipline classifications, prioritize my own manual classifications for 
# sake of this analysis.
# There is a better way to do this, I just need to get this done 
disc_all <- full_join(disc_manual, disc_adc) %>% 
  mutate(disc_cat_1 = ifelse(is.na(disc_cat_1), category_1, disc_cat_1),
         disc_cat_2 = ifelse(is.na(disc_cat_2), category_2, disc_cat_2),
         disc_cat_3 = ifelse(is.na(disc_cat_3), category_3, disc_cat_3),
         disc_cat_4 = ifelse(is.na(disc_cat_4), category_4, disc_cat_4),
         disc_cat_5 = ifelse(is.na(disc_cat_5), category_5, disc_cat_5)
  ) %>% 
  select(-c(category_1, category_2, category_3, category_4, category_5)) %>% 
  mutate(dataset_id = sub(pattern = "^doi:{1}", dataset_id, replacement = "" ))

# assign dataset classifications to found scythe citations
scythe_cit_disc <- left_join(scythe_cit, disc_all) %>% 
  distinct(article_id, dataset_id, .keep_all = TRUE) 

# transform 5 discipline classification columns into single column - multiple rows per dataset
scythe_cit_disc_l <- scythe_cit_disc %>% 
  pivot_longer(cols = 5:9, names_to = NULL) %>% 
  na.omit()

# summarize disc classifications - sum number of citations per category
cit_disc <- scythe_cit_disc_l %>% 
  group_by(value) %>% 
  summarise(n_cit = length(unique(article_id, dataset_id)))


knitr::kable(cit_disc, 
             col.names = c("Dataset Discipline", "Number of Citations"))
Table 3: Number of citations found by discipline

Dataset Discipline      Number of Citations
Archaeology                               1
Atmospheric Science                      73
Biochemistry                              4
Chemistry                                 2
Cryology                                356
Ecology                                  26
Forestry                                  2
Geochemistry                              6
Geodesy                                   1
Geology                                   1
Geophysics                                6
Glaciology                               42
Human Geography                           3
Hydrology                                38
Microbiology                              1
Oceanography                            228
Physical Geography                        5
Physics                                   1
Plant Science                            10
Soil Science                             27
Zoology                                   3
Code
# academic discipline ontology fit to ADC datasets and scythe citations
source(file.path("./R/ontology_hierarchy.R"))
source(file.path("./R/sunburst_discipline.R"))

# all levels - hydrology broken into individual leaves
sun_all <- sunburstR::sunburst(sun_levels_all, 
                               legend=FALSE,
                               #percent=TRUE,
                               count=TRUE,
                               color = adc_color)
sun_all

Figure 2: scythe dataset citations grouped by academic discipline.

Of the 1060 unique dataset citations found by scythe, datasets classified as Cryology and Oceanography constituted the vast majority of citations, with 356 and 228 citations respectively (Table 3 and Figure 2).

6.3 Citations over time

Code
# analysis date as date object
date_date <- as.Date(date)

date_uploaded <- read_csv(file.path(output_directory, 
                                    paste0("solr_adc_", date, "_disc.csv"))) %>% 
  select(1:2) %>% 
  mutate(dateUploaded = as.Date(dateUploaded)) %>% 
  mutate(age = date_date - dateUploaded) %>% 
  mutate(dataset_id = sub(pattern = "^doi:{1}", dataset_id, replacement = "" ))
Warning: One or more parsing issues, see `problems()` for details
Code
scythe_cit_date <- scythe_cit %>% 
  left_join(date_uploaded) %>% 
  filter(!is.na(dateUploaded)) # remove citations without dateUploaded info

scythe_cit_date_sum <- scythe_cit_date %>% 
  group_by(dataset_id, age) %>% 
  summarize("num_cit" = length(article_id))

date_graph <- scythe_cit_date_sum %>% 
  ggplot(aes(x = age, y = num_cit)) +
  geom_point() + 
  theme_classic() +
  labs(x = "# days dataset has been available to analysis date",
       y = "number of citations found by scythe")
date_graph

Figure 3: Number of dataset citations per dataset as related to the number of days publicly available on the ADC up to the date of analysis (2022-11-03)

It looks like there was an event around 700 days before the analysis date (almost two years ago), possibly State of the Arctic report metadata records showing up. Overall there does not appear to be an obvious relationship with time; most datasets have a low number of citations. 243 datasets had 888 citations found by scythe and had dateUploaded data from Solr.

6.4 How many citations found by scythe are already known to DataOne Metrics Service?

do_cit_src_07 came from Rushiraj in July. It records how each citation entered the DataOne Metrics Service: Crossref, Metrics Service Ingest (i.e., previous scythe runs), and ORCID. I cross-referenced the scythe citation results with both DataOne metrics citation lists and looked at the distribution of citation sources.

Is this going to be a part of AGU? Is it interesting to others?

Code
do_cit_src_07 <- readr::read_csv(file.path(data_dir, "dataone_cits_report_2022_07_25.csv"))

# source_id = 'Unique identifier to the source dataset / document / article that cited the target dataset '
# target_id = 'Unique identifier to the target DATAONE dataset. This is the dataset that was cited.'

# clean up dataone citation reporter csv. Remove extra ' from character strings
do_cit_src_07 <- as.data.frame(lapply(do_cit_src_07, gsub,
                                      pattern = "'", replacement = "", fixed = TRUE))

# rename dataone metrics citations columns to match scythe results
# replace unique Orcid # with "ORCiD"
do_cit_src_07 %<>%
  rename("article_id" = source_id, 
         "dataset_id" = target_id) %>% 
  mutate(reporter = sub("^http.*", "ORCiD", reporter))

do_cit_source_sum <- do_cit_src_07 %>% 
  group_by(reporter) %>% 
  summarise(num_cit = n()) 

do_cit_source_fig <- do_cit_source_sum %>% 
  ggplot(aes(reporter, num_cit)) +
  geom_col() +
  coord_flip() +
  theme_minimal() +
  theme(panel.grid.major.y = element_blank(),
        axis.text.x=element_blank()) +
  scale_y_continuous(limits = c(NA, 1300)) +
  geom_text(aes(label = num_cit), hjust = -0.5) +
  labs(x = "",
       y = "Number of Citations",
       caption = "Total citations count July 2022: 2035")

do_cit_source_fig

Figure 4: Dataset Citation Reporting Sources From the DataOne Arctic Data Center Metrics Service

Code
unique_citations <- scythe_cit %>% 
  distinct(article_id, dataset_id)

scythe_cit_new <- anti_join(unique_citations, do_cit_src_07, by = c("article_id", "dataset_id")) %>% 
  na.omit()
# have 642 new scythe citations not found in dataone metrics

# Citations in dataone metrics that also show up in the latest scythe search `unique_citations`
# These are the dataone metrics and scythe overlap citations
scythe_in_dataone <- semi_join(do_cit_src_07, unique_citations, by = c("article_id", "dataset_id"))


scythe_in_do_sum <- scythe_in_dataone %>% 
  group_by(reporter) %>% 
  summarise(num_cit = n())

knitr::kable(scythe_in_do_sum, 
             col.names = c("Source", "Number of Citations"))


scythe found 1060 unique new citations not currently in the DataOne Metrics Service in November 2022. The code needs to be functionalized to compare results between two run dates.
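A hypothetical helper for that comparison (a sketch only; the per-run file naming below is an assumption, not the repo's actual convention):

Code
# hypothetical sketch: return citations found in a second scythe run but not a first.
# Assumes combined results are saved per run date as "scythe_cit_all_<date>.csv";
# adjust to the actual paths created in R/analysis_paths.R.
compare_runs <- function(date_1, date_2, dir = output_directory) {
  run_1 <- readr::read_csv(file.path(dir, paste0("scythe_cit_all_", date_1, ".csv")))
  run_2 <- readr::read_csv(file.path(dir, paste0("scythe_cit_all_", date_2, ".csv")))
  dplyr::anti_join(dplyr::distinct(run_2, article_id, dataset_id),
                   dplyr::distinct(run_1, article_id, dataset_id),
                   by = c("article_id", "dataset_id"))
}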

This could be a figure - proportion columns

Query CrossRef to see whether the citations scythe found were already reported there
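One possible approach is a sketch against the public Crossref Event Data API (endpoint and response structure should be verified against its documentation):

Code
# sketch: ask Crossref Event Data for events that target one dataset DOI
library(httr)
resp <- GET("https://api.eventdata.crossref.org/v1/events",
            query = list(`obj-id` = "https://doi.org/10.18739/A2M32N95V", # example DOI from above
                         mailto = "you@example.org")) # use a real contact address
events <- content(resp)$message$events
length(events) # number of events reported for this dataset DOI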

7 Possible Next Steps

  • Dataset citations are rare and the number of classifications varies widely; sampling biases need to be controlled for: https://zenodo.org/record/4730857#.YoaQ2WDMKrM

  • The total number of citations is extremely useful. Ground-truth analysis: for a small number of datasets, manually search through the literature for citations.

  • Do usage metrics (downloads and views) correlate well with citation metrics?

  • Network analysis