The purpose of ELAsTiCC ("Extended LSST Astronomical Time-series Classification Challenge") is to spur the creation and testing of an end-to-end real-time pipeline for time-domain science. The challenge starts with a simulation of ~5 million detected events that includes ~50 million alerts. These alerts will be streamed from LSST to brokers, who will classify the events and send new alerts with classifications back to DESC. A talk about ELAsTiCC given at the LSSTC Enabling Science Broker Workshop in 2021 can be found on YouTube. Two posters on ELAsTiCC given at conferences can be found below on this page.
For discussion or questions about the challenge, use the #elasticc-comms channel on the DESC Slack.
The first ELAsTiCC campaign ran from September 2022 until early January 2023. Metrics and diagnostics from that campaign can be found on the ELAsTiCC page of the DESC TOM (login required).
The second ELAsTiCC campaign (dubbed ELAsTiCC2) ran from mid-November to mid-December 2023, streaming alerts at ~3× the rate of the first campaign. Diagnostics and some metrics from that campaign can be found on the ELAsTiCC2 page of the DESC TOM (login required).
There is a new GitHub repository for ELAsTiCC-related code and information: LSSTDESC/elasticc.
(Two poster images; click each image for its PDF.)
ELAsTiCC Lead: Gautham Narayan (UIUC)
ELAsTiCC team members: Alex Gagliano (UIUC), Alex Malz (Ruhr-Universität Bochum), Catarina Alves (University College London), Deep Chatterjee (UIUC), Emille Ishida (Université Clermont-Ferrand), Heather Kelly (SLAC), John Franklin Crenshaw (U. Washington), Konstantin Malanchev (UIUC), Laura Salo (UMN), Maria Vincenzi (ICG Portsmouth), Martine Lokken (U. Toronto), Qifeng Cheng (UIUC), Rahul Biswas (Oskar Klein Centre), Renée Hložek (U. Toronto), Rick Kessler (U. Chicago), Robert Knop (LBNL), Ved Shah Gautam (UIUC)
Brokers:
The DESC TOM is a web server based on Django and the TOM Toolkit; it sent out the simulated ELAsTiCC alerts and collected all of the classifications from the brokers. Some of the data access described below (direct web access, and API access in documentation and example Jupyter notebooks) requires an account on the TOM. If you do not already have one, contact Rob Knop in #elasticc-comms on the DESC Slack.
Some relevant pages on the DESC TOM:
To get access to the database behind the DESC TOM (via web APIs, and directly via SQL), see Accessing classification results and metrics below.
The ELAsTiCC and ELAsTiCC2 data sets each include SNANA-simulated photometry of ~4 million transient and variable objects. Some object types (AGN, and especially variable stars) are underrepresented, as the focus of ELAsTiCC was photometric identification of different types of transients. The ELAsTiCC2 data set includes ~50 million detections ("sources") and ~400 million photometry points ("forced sources", some of which are redundant with sources). The simulation was a photometry-level simulation, not a pixel-level simulation, so there is no pixel data, and there is no simulated uncertainty on the RA and Dec of detected objects. Host galaxies were simulated, and each object includes zero to two possible hosts.
In most cases, you will want to use the ELAsTiCC2 data set. It uses a more current simulated LSST cadence (baseline 3.2, including a rolling cadence in years 2-3, and including DDF fields), and some models were updated between ELAsTiCC and ELAsTiCC2.
ELAsTiCC and ELAsTiCC2 data are both stored in the database behind the DESC TOM; see Accessing classification results and metrics below for information about this.
The original ELAsTiCC data set is available in alert format, which is not the most convenient format for most uses. If you require the SNANA FITS files, we may be able to dig them up, but for most cases just use the ELAsTiCC2 data set which is already in that format.
ELAsTiCC alerts can be found at NERSC:
/global/cfs/cdirs/desc-td/ALERTS/ELASTICC_ALERTS_FINAL
The ELAsTiCC2 data set is available as SNANA FITS files, including HEAD and PHOT files. These may be read as standard FITS tables, but of course to really use them you need to know something about the format that SNANA writes.
ELAsTiCC2 data can be found at NERSC:
/global/cfs/cdirs/desc-td/ELASTICC2
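As a quick way to get started, here is a minimal reading sketch with astropy. It assumes the standard SNANA convention that each HEAD row describes one object, with PTROBS_MIN and PTROBS_MAX giving a 1-based, inclusive row range into the matching PHOT table; the file names below are placeholders, not actual files.

    # Minimal sketch, assuming the standard SNANA HEAD/PHOT convention.
    # File names are placeholders; substitute a real HEAD/PHOT pair from
    # the directory above.
    from astropy.table import Table

    head = Table.read("SOMETHING_HEAD.FITS.gz")
    phot = Table.read("SOMETHING_PHOT.FITS.gz")

    row = head[0]
    print("object:", row["SNID"])

    # PTROBS_MIN/PTROBS_MAX are 1-based and inclusive; convert to a
    # 0-based Python slice to pull out this object's photometry.
    i0 = int(row["PTROBS_MIN"]) - 1
    i1 = int(row["PTROBS_MAX"])
    lightcurve = phot[i0:i1]
    print(lightcurve["MJD", "BAND", "FLUXCAL", "FLUXCALERR"])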
Before each ELAsTiCC campaign, brokers were sent a "training set" of lightcurves to use for training their models. These training sets were not identical in composition to the actual data set; this was intentional, because when the real LSST survey starts, brokers will not have had the chance to train on data identical to the data they will be receiving. (If anything, the ELAsTiCC training sets were too close to the actual data sets in comparison to anything we'll have before the start of LSST and the first year of LSST data.) The cadence was at least slightly different, and some models were updated between the production of the training set and the actual data sets. The ELAsTiCC2 training set is closer to the final ELAsTiCC2 data set than was the case for ELAsTiCC.
For your purposes, you may wish to ignore these training sets and instead divide the actual ELAsTiCC2 data sets (above) into training and validation sets yourself.
The format of the training set files is outlined in the file A_FORMAT.TXT (found in the same directory as the training set). A log of the models produced by the SNANA simulation is in the file A_MODEL_SUMMARY.TXT.
This Jupyter notebook has a demo of using the ELAsTiCC photo-z quantiles.
The ELAsTiCC2 training sample may be found at https://portal.nersc.gov/cfs/lsst/DESC_TD_PUBLIC/ELASTICC/ELASTICC2_TRAINING_SAMPLE_2 (which is also accessible directly on NERSC in the directory /global/cfs/cdirs/lsst/www/DESC_TD_PUBLIC/ELASTICC/ELASTICC2_TRAINING_SAMPLE_2). Meta information can be found in the A_FORMAT.TXT and A_MODEL_SUMMARY.TXT files. The training set is available in a few different formats:
Truth tables are available in the database behind the DESC TOM; for more information, see Accessing classification results and metrics. Additionally, they may be found in the files described below.
The following CSV files hold the ELAsTiCC truth tables:
The "OBJECT" truth tables have truth for each object; the column SNID corresponds to the field diaObjectId from the alerts. The "ALERT" truth tables have information for each source (there was one alert for each source); the column SourceID corresponds to the field diaSourceId from the alerts. The object type is in the GENTYPE (for object alerts) or TRUE_GENTYPE (for source alerts). These do not correspond directly to the taxonomy brokers used to classify objects, but are internal types corresponding to SNANA models. The definitions of these types may be found in the file elasticc_origmap.txt in the alert_schema subdirectory of the elasticc GitHub archive.
Broker classifications used the ELAsTiCC Taxonomy (which was different from the ELAsTiCC2 taxonomy!). The following CSV files hold the mapping between SNANA gentype and taxonomy id (they are dumps of tables from the DESC TOM database):
There were some types of objects that were in the ELAsTiCC set that were deliberately not in the training set. These have SNANA GENTYPE 71-74 and 98. 71-74 represent strongly lensed SN Ia/II/Ib/Ic, and 98 is...special. Here is a note by Rick Kessler and Justin Pierel describing the strongly lensed SNe.
In addition to being stored in the DESC TOM database, ELAsTiCC2 truth tables may be found in the *.DUMP files alongside the SNANA FITS files with the ELAsTiCC2 Data Set.
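To illustrate connecting these pieces, here is a hedged sketch that joins an object truth table to a gentype-to-classId map with pandas. The file names and the map's column names here are assumptions for illustration; check the headers of the actual CSV files before relying on them.

    # Hypothetical file and column names -- verify against the real CSVs.
    import pandas as pd

    truth = pd.read_csv("elasticc_object_truth.csv")         # SNID, GENTYPE, ...
    taxmap = pd.read_csv("elasticc_gentype_to_classid.csv")  # gentype, classId, ...

    # Attach the taxonomy classId corresponding to each object's GENTYPE.
    merged = truth.merge(taxmap, left_on="GENTYPE", right_on="gentype", how="left")
    print(merged[["SNID", "GENTYPE", "classId"]].head())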
ELAsTiCC2 used a hierarchical classification taxonomy with broad classes and specific classes. In practice, most brokers classified only to specific classes, but some brokers used some of the broad classes, and they were there if somebody needed them. The design of the taxonomy also allows us to group subclasses directly into a broad class by only looking at the first digits of a classification.
The classification taxonomy may be found in the Jupyter notebook that generated the IDs, but is also listed here for convenience:
Alert
├── 0 Meta
│   ├── 100 Meta/Other
│   ├── 200 Residual
│   └── 300 NotClassified
├── 1000 Static
│   └── 1100 Static/Other
└── 2000 Variable
    ├── 2100 Variable/Other
    ├── 2200 Non-Recurring
    │   ├── 2210 Non-Recurring/Other
    │   ├── 2220 SN-like
    │   │   ├── 2221 SN-like/Other
    │   │   ├── 2222 Ia
    │   │   ├── 2223 Ib/c
    │   │   ├── 2224 II
    │   │   ├── 2225 Iax
    │   │   └── 2226 91bg
    │   ├── 2230 Fast
    │   │   ├── 2231 Fast/Other
    │   │   ├── 2232 KN
    │   │   ├── 2233 M-dwarf Flare
    │   │   ├── 2234 Dwarf Novae
    │   │   └── 2235 uLens
    │   └── 2240 Long
    │       ├── 2241 Long/Other
    │       ├── 2242 SLSN
    │       ├── 2243 TDE
    │       ├── 2244 ILOT
    │       ├── 2245 CART
    │       └── 2246 PISN
    └── 2300 Recurring
        ├── 2310 Recurring/Other
        ├── 2320 Periodic
        │   ├── 2321 Periodic/Other
        │   ├── 2322 Cepheid
        │   ├── 2323 RR Lyrae
        │   ├── 2324 Delta Scuti
        │   ├── 2325 EB
        │   └── 2326 LPV/Mira
        └── 2330 Non-Periodic
            ├── 2331 Non-Periodic/Other
            └── 2332 AGN
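Because the IDs are hierarchical by digit, a specific class can be collapsed into any of its parents by zeroing trailing digits. A minimal sketch of that grouping (not official ELAsTiCC code):

    # Zero out the last `level` digits of a classId to get a parent class.
    def broaden(class_id: int, level: int) -> int:
        factor = 10 ** level
        return (class_id // factor) * factor

    assert broaden(2222, 1) == 2220   # Ia -> SN-like
    assert broaden(2232, 2) == 2200   # KN -> Non-Recurring
    assert broaden(2322, 3) == 2000   # Cepheid -> Variable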
To connect broker classifications to truth tables, you need the mapping from this taxonomy class ID to the SNANA "gentype". These mappings may be found in the DESC TOM database, but are also provided below:
These are the alert schema used in ELAsTiCC2.
The alert schema can be found in the alert_schema subdirectory of the LSSTDESC/elasticc github repository: https://github.com/LSSTDESC/elasticc/tree/main/alert_schema.
Brokers ingested alerts in the elasticc.v0_9_1.alert.avsc format. (A perusal of the schema will reveal that some of the other schemas in that directory are embedded in this one.) They issued their classification alerts, which DESC then ingested, in the elasticc.v0_9_1.brokerClassification.avsc schema. The mapping of event type to classId can be found in a Jupyter notebook in the taxonomy subdirectory of the GitHub repository.
All alerts were published without embedded schema on Kafka servers (both to and from brokers). As such, for things to work, everybody had to be using the same version of the alerts. (The alert format and schema remained consistent throughout the running of the ELAsTiCC2 campaign.)
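Decoding such a schemaless message therefore requires loading the exact schema version the writer used. A minimal sketch with fastavro, assuming the .avsc files from the alert_schema directory are available locally:

    # Minimal sketch: decode a schemaless Avro payload (e.g. the value of
    # a Kafka record) using the writer's schema. load_schema also pulls in
    # the named sub-schemas that the alert schema embeds by reference,
    # provided those .avsc files sit in the same directory.
    import io
    import fastavro

    schema = fastavro.schema.load_schema("elasticc.v0_9_1.alert.avsc")

    def decode_alert(payload: bytes) -> dict:
        return fastavro.schemaless_reader(io.BytesIO(payload), schema)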
The first detection of a transient will not have any forced photometry. The model is that the project will need time to produce that forced photometry.
All detections at least one night later than the first detection will have forced photometry going back to 30 days before the first detection.
For example, suppose object 42 is detected on MJD 60305, 60306, 60310, and 60340:
The alert for the detection on MJD 60305 will only have the source information for that detection.
The alert for the detection on MJD 60306 will have the source information for the detections on 60305 and 60306. It will also have forced photometry for any images taken between MJD 60275 and 60306.
...
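In other words, the forced-photometry window attached to an alert is determined entirely by the MJD of the first detection. A schematic helper illustrating the rule (hypothetical, with night boundaries approximated as one-day steps in MJD):

    # Hypothetical illustration of the rule above. Returns the (start, end)
    # MJD range of forced photometry included in an alert, or None for
    # alerts from the night of the first detection.
    def forced_phot_window(first_det_mjd, alert_mjd):
        if alert_mjd < first_det_mjd + 1:     # same night: no forced phot yet
            return None
        return (first_det_mjd - 30.0, alert_mjd)

    assert forced_phot_window(60305.0, 60305.2) is None
    assert forced_phot_window(60305.0, 60306.1) == (60275.0, 60306.1)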
You will need an account on the DESC TOM to do this. (See above.)
You can find some metrics by going directly to a couple of TOM pages in your browser:
Because some of these tables are quite large, queries joining them together can be slow. For ELAsTiCC2, broker classifications have been aggregated in ways that are useful for some metrics. For documentation on the aggregation and how to get access to it, see this Jupyter Notebook in the DESC ELAsTiCC metrics github archive.
Technically, this isn't possible, because the PostgreSQL database behind the TOM can't be addressed directly. However, there are a couple of APIs on the TOM that allow you to send SQL that will be run on the PostgreSQL server, and pull back the results. For documentation and examples, see the Jupyter notebook sql_query_tom_db.ipynb in the github DESC TOM repository.
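The shape of that pattern, heavily hedged (the URL, endpoint, and table name below are placeholders; the notebook has the real endpoint names and the authentication flow):

    # Placeholder URL/endpoint -- see sql_query_tom_db.ipynb for the real
    # ones and for how to authenticate the session against the TOM.
    import requests

    session = requests.Session()
    # ... log the session in to the TOM here (see the notebook) ...

    resp = session.post(
        "https://desc-tom.example/db/runsqlquery/",
        json={"query": "SELECT COUNT(*) FROM some_table"},
    )
    resp.raise_for_status()
    print(resp.json())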