The EJAM package and Shiny app rely on many data objects: numerous datasets stored in the package’s /data/ folder, plus several large tables stored in a separate repository created specifically to hold them. Those large tables contain information on Census blockgroups, Census block internal points, Census block population weights, and EPA FRS facilities.
How to Update Datasets in EJAM
The process begins from within the EJAM code repo, using the various datacreate_* scripts to create updated arrow datasets. Notes and scripts are consolidated in the /data-raw/ folder, and the starting point is the overarching set of scripts and comments in the file /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R.
That file covers not only the large arrow datasets that are stored in a separate repository, but also many smaller data objects that are installed along with the package in the /data/ folder. Updating all of the package’s data objects can be complicated because they differ in type, format, and storage location.
The various data objects need to be updated at different frequencies: some only yearly (ACS data), and others whenever facility IDs and locations change (as often as possible, such as when EPA’s FRS is updated). Some need to be updated only when the package features/code change, such as the important data object map_headernames (which in turn is used to update objects such as names_e).
See the draft utility EJAM:::datapack() for a view of datasets, if useful:

x <- EJAM:::datapack()
## Get more info with datapack(simple = FALSE)
## ignoring sortbysize because simple=TRUE
x$Item[!grepl("names_|^test", x$Item)]
## [1] "NAICS" "SIC"
## [3] "avg.in.us" "bg_cenpop2020"
## [5] "bgpts" "blockgroupstats"
## [7] "censusplaces" "custom"
## [9] "default_points_shown_at_startup" "ejamdata_version"
## [11] "ejampackages" "ejscreenRESTbroker2table_na_filler"
## [13] "epa_programs" "epa_programs_defined"
## [15] "formulas_all" "formulas_d"
## [17] "frsprogramcodes" "high_pctiles_tied_with_min"
## [19] "islandareas" "lat_alias"
## [21] "lon_alias" "mact_table"
## [23] "map_headernames" "metadata4pins"
## [25] "meters_per_mile" "modelDoaggregate"
## [27] "modelEjamit" "naics_counts"
## [29] "naicstable" "namez"
## [31] "sictable" "stateinfo"
## [33] "stateinfo2" "states_shapefile"
## [35] "statestats" "usastats"
## [37] "x_anyother"
Some notable data files, code details, and other objects that may need to be changed ANNUALLY or more often:
- Blockgroup datasets: These include the built-in package datasets blockgroupstats, usastats, and statestats, and the larger tables stored elsewhere: bgej, bgid2fips, and several others. They are all created or modified using scripts/functions organized from within /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R. Prior to 2025, several key datasets used by EJAM were obtained from EPA’s EJScreen-related FTP site, and others directly from relevant staff. Many of the indicators on the Community Report for the v2.2 (early 2024) EJScreen data were NOT provided in the gdb and csv files on the FTP site, so they had to be obtained directly from the EJScreen team as a separate .csv file. The code referred to from /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R assumes the basic datasets from EPA are available, and then converts them into the datasets actually used by EJAM. If those are no longer available from those sources, the data could mostly be recreated independently, but that would require a combination of existing code and significant new work. Some relevant code for creating environmental datasets was in archived EJScreen repositories. Some code is in EJAM functions that can get ACS datasets. Much relevant code was in an older non-EPA package called ejscreen, which had been made private as of early 2025 but could be refreshed; that package had tools such as ejscreen.create() that could reproduce parts of the blockgroupstats and usastats/statestats datasets.
- Block (not blockgroup) tables: These might be updated less often, but Census FIPS codes do change yearly, so blockwts, blockpoints, quaddata, blockid2fips, and related additional data.tables should be updated as needed. This is also done from within /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R. See the package census2020download on GitHub for the function census2020_get_data(), which may be useful.
- EPA facility data: Locations of EPA facilities and their NAICS/SIC/MACT/program information ideally need frequent updates, since facilities open, close, relocate, or have their information corrected or otherwise updated. EPA’s FRS is the source for much of this information stored in tables EJAM uses, such as frs, frs_by_programid, frs_by_naics, frs_by_sic, and frs_by_mact, as well as NAICS, SIC, naics_counts, naicstable, sictable, mact_table, epa_programs, frsprogramcodes, etc. Again, these are updated from within /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R.
All other updates of data objects and their documentation are organized within /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R. Documentation of datasets via .R files is generally handled by the scripts that create/update the datasets.

map_headernames and the associated .xlsx file, etc., are critical. These need to be updated especially if indicator names change or are added, for example. map_headernames holds most of the useful metadata about each variable (each indicator, like % low income), such as how many digits to use in rounding, units, a long name for the indicator, the type or category of indicator, the sort order to use in reports, and the method of calculating aggregations of the indicator over blockgroups. This is modified directly in the map_headernames spreadsheet, and then functions/scripts read that .xlsx to create the map_headernames dataset. See /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R.
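Because map_headernames drives so much behavior, it can help to inspect it directly when planning a change. A minimal sketch, assuming map_headernames is loaded as a data.frame/data.table; the indicator name searched for below is only illustrative, and the real field names are whatever the current .xlsx defines:

```r
# Explore the indicator metadata in map_headernames
dim(map_headernames)
names(map_headernames)   # fields such as rounding digits, units, long names, sort order, etc.

# Find the row(s) mentioning a given indicator (searching all columns, since the exact
# name of the column that holds variable names may differ across versions):
hits <- apply(map_headernames, 1, function(r) any(grepl("lowinc", r, ignore.case = TRUE)))
map_headernames[hits, ]
```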
Test data (inputs) and examples of outputs may have to be updated (every time parameters change and whenever the returned outputs change). Those are generated by scripts/functions referred to from /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R.

A default year is used in various functions, such as the last year of the 5-year ACS dataset. Defaults like yr or year should be updated via global searches where relevant.

Metadata about vintage/version is stored in attributes of most datasets. That is updated via scripts/functions used by /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R, for example the helpers metadata_add() and metadata_check(), and metadata_mapping.R.

Version numbering is recorded primarily in the DESCRIPTION file; note also the use of the ejamdata_version.txt file, tags on releases, and the NEWS file.
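To confirm that those vintage/version attributes were attached after an update, something like the following can be used; it simply lists non-standard attributes, since the specific attribute names vary by dataset and EJAM version:

```r
# List whatever metadata attributes are attached to a dataset such as blockgroupstats
atts <- attributes(blockgroupstats)
atts[setdiff(names(atts), c("names", "row.names", "class", ".internal.selfref"))]

# metadata_check(blockgroupstats)  # assumed usage of the helper mentioned above
```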
Other documentation: updates may be needed for the README, vignettes, and possibly examples in some functions, in case dataset updates change how those examples work.
Again, all of those updates should be done starting from an understanding of the file /data-raw/datacreate_0_UPDATE_ALL_DATASETS.R. That script includes steps to update metadata and documentation and to save new versions of data in the data folder where appropriate.
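The individual datacreate_* scripts that file points to generally follow a simple pattern. The sketch below is only illustrative: build_updated_table() is a hypothetical placeholder and the helper signatures are assumptions, though usethis::use_data() is the standard mechanism for saving .rda files into /data/:

```r
# Illustrative pattern for updating one package dataset (not a literal excerpt of any script):
my_dataset <- build_updated_table()              # hypothetical step that creates the new object
my_dataset <- metadata_add(my_dataset)           # attach vintage/version attributes (assumed usage)
# metadata_check(my_dataset)                     # optional check (assumed usage)
usethis::use_data(my_dataset, overwrite = TRUE)  # save the .rda into the package's /data/ folder
# ...then update the corresponding roxygen documentation in an R/ file.
```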
The information below focuses on the other type of data object: the set of large arrow files that are stored outside the package code repository.
Repository that stores the large arrow files
The large, mostly Census-related tables are not installed as part of the R package in the typical /data/ folder that contains .rda files lazy-loaded by the package. Instead, they are kept in a separate GitHub repository that we refer to here as the data repository. The data repository used by the current (installed or loaded source) version of EJAM can be found with:

desc::desc(file = system.file("DESCRIPTION", package = "EJAM"))$get("ejam_data_repo")
IMPORTANT: The name of the data repository (as distinct from the package code repository) must be recorded/updated in the EJAM package DESCRIPTION file, so that the package knows where to look for the data files, for example if the datasets are moved to a new repository.
Census-related arrow files
To store the large files needed by the EJAM package, we use the Apache Arrow file format through the {arrow} R package, with file extension .arrow. This allows us to work with larger-than-memory data and store it outside of the EJAM package itself.
The names of these tables should be listed in the file R/arrow_ds_names.R and the global variable .arrow_ds_names, which is used by functions like dataload_dynamic() and dataload_from_local().
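As an illustration from an R session (the {arrow} calls are standard functions from that package rather than EJAM-specific; accessing .arrow_ds_names via ::: and the exact file paths are assumptions here):

```r
library(arrow)

# The internal character vector of expected arrow table names described above:
EJAM:::.arrow_ds_names

# An individual .arrow file can be read fully into memory...
bgej <- read_ipc_file("bgej.arrow")                 # arrow::read_feather() also reads this format

# ...or opened lazily as an Arrow Dataset, so nothing is loaded until queried:
bgej_ds <- open_dataset("bgej.arrow", format = "arrow")
```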
As of mid-2025, there were 11 arrow files used by EJAM:
- bgid2fips.arrow: crosswalk of EJAM blockgroup IDs (1-n) with 12-digit blockgroup FIPS codes
- blockid2fips.arrow: crosswalk of EJAM block IDs (1-n) with 15-digit block FIPS codes
- blockpoints.arrow: Census block internal points lat-lon coordinates, EJAM block ID
- blockwts.arrow: Census block population weight as share of blockgroup population, EJAM block and blockgroup ID
- bgej.arrow: blockgroup-level statistics of EJ variables
- quaddata.arrow: 3D spherical coordinates of Census block internal points, with EJAM block ID
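As an example of how these tables fit together, a quick consistency check one might run after rebuilding blockwts.arrow is shown below; the column names bgid and blockwt are assumptions for illustration, so verify them against the actual file first:

```r
library(arrow)
library(dplyr)

blockwts <- read_ipc_file("blockwts.arrow")   # block-to-blockgroup population weights

# Within each blockgroup, block population weights should typically sum to about 1;
# list any blockgroups where they do not (column names here are assumed, not confirmed):
blockwts |>
  group_by(bgid) |>
  summarise(total_wt = sum(blockwt, na.rm = TRUE), .groups = "drop") |>
  filter(abs(total_wt - 1) > 0.001)
```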
FRS-related arrow files
- frs.arrow: data.table of EPA Facility Registry Service (FRS) regulated sites
- frs_by_naics.arrow: data.table of NAICS industry code(s) for each EPA-regulated site in the Facility Registry Service
- frs_by_sic.arrow: data.table of SIC industry code(s) for each EPA-regulated site in the Facility Registry Service
- frs_by_programid.arrow: data.table of Program System ID code(s) for each EPA-regulated site in the Facility Registry Service
- frs_by_mact.arrow: data.table of MACT NESHAP subpart(s) that each EPA-regulated site is subject to
This document outlines how we will operationalize EJAM’s download, management, and in-app loading of these arrow datasets.
Below is a description of a workable default approach, followed by options for potential improvements in automation and processing efficiency.
Development/Setup
The arrow files are stored in a separate, public, Git-LFS-enabled GitHub repo (henceforth ‘ejamdata’). The owner/reponame must be recorded/updated in the DESCRIPTION file field called ejam_data_repo – that info is used by the package.
Then, and any time the arrow datasets are updated, we update the ejamdata release version via the .github/push_to_ejam.yaml workflow in the ejamdata repo, thereby saving the arrow files with the release, to be downloaded automatically by EJAM.
EJAM’s download_latest_arrow_data() function does the following (a rough sketch of this logic appears after this list):

- Checks the ejamdata repo’s latest release/version.
- Checks the user’s EJAM package’s ejamdata version, which is stored in data/ejamdata_version.txt.
  - If the data/ejamdata_version.txt file doesn’t exist, e.g. if it’s the first time installing EJAM, it will be created at the end of the script.
- If the versions are different, downloads the latest arrow files from the latest ejamdata release with piggyback::pb_download(). See how that function works for details.
- We add a call to this function in the onAttach script (via the dataload_dynamic() function) so it runs and ensures the latest arrow files are downloaded when the user loads EJAM.
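A rough sketch of that logic using {piggyback} follows; the repo name, tag handling, file paths, and the assumption that pb_releases() lists the newest release first are simplifications for illustration, not the actual implementation:

```r
library(piggyback)

data_repo    <- "USEPA/ejamdata"                    # as recorded in the DESCRIPTION field ejam_data_repo
version_file <- file.path("data", "ejamdata_version.txt")

# Latest release tag in the data repository:
latest_tag <- piggyback::pb_releases(repo = data_repo)$tag_name[1]

# Version of the arrow data this installation already has (may not exist on first install):
local_tag <- if (file.exists(version_file)) readLines(version_file, n = 1) else NA_character_

if (is.na(local_tag) || !identical(local_tag, latest_tag)) {
  if (!dir.exists("data")) dir.create("data")
  # Download the .arrow assets attached to the latest release, then record its version:
  piggyback::pb_download(repo = data_repo, tag = latest_tag, dest = "data")
  writeLines(latest_tag, version_file)
}
```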
How it Works for the User
- User installs EJAM
  - devtools::install_github("USEPA/EJAM-open") (or as adjusted depending on the actual repository owner and name)
- User loads EJAM as usual
  - library(EJAM). This will trigger the new download_latest_arrow_data() function.
- User runs EJAM as usual
  - The dataload_dynamic() function will work as usual because the data are now stored in the data directory.
How new versions of arrow datasets are republished/released
The key arrow files are updated from within the EJAM code repository, as explained above. Those files are then copied into a clone of the ejamdata repo and pushed to the actual ejamdata repo on GitHub (at USEPA/ejamdata). That push triggers ejamdata’s push_to_ejam.yaml workflow, which increments the latest release tag to reflect the new version and creates a new release.
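For reference, the manual equivalent of what that workflow accomplishes, attaching the updated .arrow files to a new release so that pb_download() can later fetch them, might look roughly like the sketch below; the tag value and file pattern are placeholders, and the real process is the GitHub Actions workflow itself:

```r
library(piggyback)

data_repo <- "USEPA/ejamdata"
new_tag   <- "v2.x.y"   # placeholder; the workflow derives the real tag automatically

# Create a new release and attach the updated arrow files as release assets:
piggyback::pb_release_create(repo = data_repo, tag = new_tag)
piggyback::pb_upload(list.files(pattern = "\\.arrow$"), repo = data_repo, tag = new_tag)
```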
Potential Improvements
Making Code more Arrow-Friendly
Problem: loading the data as tibbles/dataframes takes a long time
Solution: We may be able to modify our code to be more arrow-friendly. This essentially keeps the analysis code as a sort of query and only actually loads the results into memory when requested (via collect()). This dramatically reduces memory use, which would speed up processing and avoid potential crashes from running out of memory. However, this would require a decent lift to update the code in all the relevant places.
Pros: processing efficiency, significantly reduced memory usage
Implementation: This has been mostly implemented by the dataload_dynamic() function, which has a return_data_table parameter. If FALSE, the arrow file is loaded as an Arrow dataset rather than as a tibble/dataframe.
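As a sketch of what "arrow-friendly" code looks like in practice (the column names used here are assumptions for illustration, and only collect() is the point being demonstrated):

```r
library(arrow)
library(dplyr)

# Open the blockgroup EJ table lazily instead of reading it all into memory.
# In EJAM, dataload_dynamic() with return_data_table = FALSE similarly provides
# the data as an Arrow dataset rather than a data.table/tibble.
bgej_ds <- open_dataset("bgej.arrow", format = "arrow")

# Build the query against the Arrow dataset; nothing is computed or loaded yet...
result <- bgej_ds |>
  filter(ST == "TX") |>            # column name is an assumption for illustration
  select(bgfips, starts_with("EJ")) |>
  collect()                        # ...until collect() pulls just the results into R
```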