Introduction

This lesson is a brief introduction to TOAST and its data representations. The next cell just initializes some tools for the notebook.

In [1]:
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
    fake_focalplane
)

# Capture C++ output in the jupyter cells
%load_ext wurlitzer

Runtime Environment

You can get the current TOAST runtime configuration from the "Environment" class.

In [2]:
import toast

env = toast.Environment.get()
print(env)
<toast.Environment
  Source code version = 2.3.5.dev1474
  Logging level = INFO
  Handling enabled for 0 signals:
  Max threads = 12
  MPI build enabled
  MPI runtime enabled
>

Data Model

Before using TOAST for simulation or analysis, it is important to discuss how data is stored in memory and how that data can be distributed among many processes to parallelize large workflows.

First, let's create a fake focalplane of detectors to use throughout this example.

In [3]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Generate a fake focalplane with 7 pixels, each with 2 detectors.

fp = fake_focalplane()
In [4]:
# Make a plot of this focalplane layout.

detnames = list(sorted(fp.keys()))
detquat = {x: fp[x]["quat"] for x in detnames}
detfwhm = {x: fp[x]["fwhm_arcmin"] for x in detnames}
detlabels = {x: x for x in detnames}
detpolcol = {x: "red" if i % 2 == 0 else "blue" for i, x in enumerate(detnames)}

toast.tod.plot_focalplane(
    detquat, 4.0, 4.0, None, fwhm=detfwhm, polcolor=detpolcol, labels=detlabels
)
Out[4]:
(focalplane layout plot)
Observations with Time Ordered Data

TOAST works with data organized into observations. Each observation is independent of any other observation. An observation consists of co-sampled detectors for some span of time. The intrinsic detector noise is assumed to be stationary within an observation. Typically there are other quantities which are constant for an observation (e.g. elevation, weather conditions, satellite spin axis, etc).

An observation is just a dictionary with at least one member ("tod") which is an instance of a class that derives from the toast.TOD base class.

The inputs to a TOD class constructor are at least:

  1. The detector names for the observation.
  2. The number of samples in the observation.
  3. The geometric offset of the detectors from the boresight.
  4. Information about how detectors and samples are distributed among processes. More on this below.

The TOD class can act as a storage container for different "flavors" of timestreams as well as a source and sink for the observation data (with the read*() and write*() methods):

In [5]:
import toast.qarray as qa

nsamples = 1000

obs = dict()
obs["name"] = "20191014_000"
In [6]:
# The type of TOD class is usually specific to the data processing job.
# For example it might be one of the simulation classes or it might be
# a class that loads experiment data.  Here we just use a simple class
# that is only used for testing and which reads / writes data to internal memory
# buffers.

tod = toast.tod.TODCache(None, detnames, nsamples, detquats=detquat)
obs["tod"] = tod
In [7]:
# Print the tod to get summary info:
print(tod)
<TODCache
  14 total detectors and 1000 total samples
  Using MPI communicator None
    In grid dimensions 1 sample ranks x 1 detranks
  Process at (0, 0) in grid has data for:
    Samples 0 - 999 (inclusive)
    Detectors:
      0A
      0B
      1A
      1B
      2A
      2B
      3A
      3B
      4A
      4B
      5A
      5B
      6A
      6B
    Cache contains 0 bytes
>
In [8]:
# The TOD class has methods to get information about the data:

print("TOD has detectors {}".format(", ".join(tod.detectors)))
print("TOD has {} total samples for each detector".format(tod.total_samples))
TOD has detectors 0A, 0B, 1A, 1B, 2A, 2B, 3A, 3B, 4A, 4B, 5A, 5B, 6A, 6B
TOD has 1000 total samples for each detector
In [9]:
# Write some data.  Not every TOD derived class supports writing (for example,
# TOD classes that represent simulations).

def fill_tod(tod, fp):
    detnames = tod.detectors
    t_delta = 1.0 / fp[detnames[0]]["rate"]
    tod.write_times(stamps=np.arange(0.0, nsamples * t_delta, t_delta))
    tod.write_boresight(
        data=qa.from_angles(
            (np.pi / 2) * np.ones(nsamples),
            (2 * np.pi / nsamples) * np.arange(nsamples),
            np.zeros(nsamples)
        )
    )
    tod.write_position(pos=np.zeros((nsamples, 3), dtype=np.float64))
    tod.write_velocity(vel=np.zeros((nsamples, 3), dtype=np.float64))
    tod.write_common_flags(flags=np.zeros(nsamples, dtype=np.uint8))
    for d in detnames:
        tod.write(
            detector=d, data=np.random.normal(
                scale=fp[d]["NET"], 
                size=nsamples
            )
        )
        tod.write_flags(
            detector=d, flags=np.zeros(nsamples, dtype=np.uint8)
        )
In [10]:
fill_tod(tod, fp)
In [11]:
# Read it back

print("TOD timestamps = {} ...".format(tod.read_times()[:5]))
TOD timestamps = [0.   0.05 0.1  0.15 0.2 ] ...
In [12]:
print("TOD boresight = \n{} ...".format(tod.read_boresight()[:5,:]))
TOD boresight = 
[[ 7.07106781e-01  0.00000000e+00  7.07106781e-01  1.11022302e-16]
 [ 7.07103292e-01  2.22143781e-03  7.07103292e-01 -2.22143781e-03]
 [ 7.07092824e-01  4.44285371e-03  7.07092824e-01 -4.44285371e-03]
 [ 7.07075377e-01  6.66422575e-03  7.07075377e-01 -6.66422575e-03]
 [ 7.07050951e-01  8.88553201e-03  7.07050951e-01 -8.88553201e-03]] ...
In [13]:
for d in detnames:
    print("TOD detector {} = {} ...".format(
        d, tod.read(detector=d, n=5))
    )
    print("TOD detector {} flags = {} ...".format(
        d, tod.read_flags(detector=d, n=5))
    )
TOD detector 0A = [ 0.25759254 -0.54706632  1.14506302 -0.74782047  0.57195886] ...
TOD detector 0A flags = [0 0 0 0 0] ...
TOD detector 0B = [-0.88420186  0.68409191  1.20705825 -1.8688857   0.71436997] ...
TOD detector 0B flags = [0 0 0 0 0] ...
TOD detector 1A = [-1.24960589 -2.08648791 -1.27825076 -0.06485328 -0.72203392] ...
TOD detector 1A flags = [0 0 0 0 0] ...
TOD detector 1B = [-0.36498844  0.83760842  0.38315303 -0.34312755 -0.63002003] ...
TOD detector 1B flags = [0 0 0 0 0] ...
TOD detector 2A = [-0.24636745 -1.45772557 -1.44644602  1.13376719 -1.94021508] ...
TOD detector 2A flags = [0 0 0 0 0] ...
TOD detector 2B = [1.30544418 0.98626578 1.79204536 0.25183969 0.41058177] ...
TOD detector 2B flags = [0 0 0 0 0] ...
TOD detector 3A = [ 0.89089911 -0.45624574 -0.71167861  0.74042559 -1.32176285] ...
TOD detector 3A flags = [0 0 0 0 0] ...
TOD detector 3B = [ 0.23375497 -1.41980703  0.88947488  0.76697716  1.13937909] ...
TOD detector 3B flags = [0 0 0 0 0] ...
TOD detector 4A = [ 1.56621173  2.19891933  0.13616792  0.81018787 -0.71313832] ...
TOD detector 4A flags = [0 0 0 0 0] ...
TOD detector 4B = [-0.35760456 -1.3144734  -1.8654363   1.50333755  2.85883701] ...
TOD detector 4B flags = [0 0 0 0 0] ...
TOD detector 5A = [-0.62546702  0.12674574 -0.80354775  0.42147378 -0.42494111] ...
TOD detector 5A flags = [0 0 0 0 0] ...
TOD detector 5B = [ 0.67391657 -0.7240716   1.02939838  0.55240429  0.82689507] ...
TOD detector 5B flags = [0 0 0 0 0] ...
TOD detector 6A = [-1.47926349 -0.57201472  0.78990574  0.29260365 -2.97926406] ...
TOD detector 6A flags = [0 0 0 0 0] ...
TOD detector 6B = [ 0.8290093   0.87036621 -0.89911525 -1.17921671 -0.24391669] ...
TOD detector 6B flags = [0 0 0 0 0] ...
In [14]:
# Store some data in the cache.  The "cache" member variable looks like a dictionary of
# numpy arrays, but the memory used is allocated in C, so that we can actually clear
# these buffers when needed.

for d in detnames:
    processed = tod.read(detector=d)
    processed /= 2.0
    # By convention, we usually name buffers in the cache by <prefix>_<detector>
    tod.cache.put("processed_{}".format(d), processed)
print("TOD cache now contains {} bytes".format(tod.cache.report(silent=True)))
TOD cache now contains 327000 bytes

One common usage pattern is to "read and cache" data: read it once and keep it in memory to re-use later. The TOD class has a set of methods whose names start with "local_" that perform this action.
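The idea behind these methods can be sketched in a few lines of plain Python (this is an illustration of the pattern, not TOAST code):

```python
# A minimal sketch of the "read and cache" pattern behind the local_*()
# methods: return the cached buffer if it exists, otherwise read the data,
# store it in the cache, and return it.

class CachedReader:
    def __init__(self, reader):
        self._read = reader   # function that loads data from the source
        self._cache = {}      # buffer name -> data
        self.reads = 0        # number of actual reads, for illustration

    def local(self, name):
        if name not in self._cache:
            self._cache[name] = self._read(name)
            self.reads += 1
        return self._cache[name]

src = CachedReader(lambda name: [0.0] * 4)
first = src.local("signal_0A")
second = src.local("signal_0A")  # served from the cache, no second read
print(src.reads)  # 1
```

Repeated calls return the same buffer, so later processing steps can modify the cached data in place.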

In [15]:
# Get data from cache or read and cache

print("TOD timestamps = {} ...".format(tod.local_times()[:2]))
for d in detnames:
    print("TOD detector {} = {} ...".format(
        d, tod.local_signal(d)[:2])
    )
    print("TOD detector {} pointing = {} ...".format(
        d, tod.local_pointing(d)[:2,:])
    )
    print("TOD detector {} flags = {} ...".format(
        d, tod.local_flags(d)[:2])
    )
TOD timestamps = [0.   0.05] ...
TOD detector 0A = [ 0.12879627 -0.27353316] ...
TOD detector 0A pointing = [[ 0.65328148 -0.27059805  0.65328148 -0.27059805]
 [ 0.65412837 -0.26854437  0.65242815 -0.27264906]] ...
TOD detector 0A flags = [0 0] ...
TOD detector 0B = [-0.44210093  0.34204596] ...
TOD detector 0B pointing = [[ 0.27059805 -0.65328148  0.27059805 -0.65328148]
 [ 0.27264906 -0.65242815  0.26854437 -0.65412837]] ...
TOD detector 0B flags = [0 0] ...
TOD detector 1A = [-0.62480294 -1.04324396] ...
TOD detector 1A pointing = [[ 6.99945726e-01  1.90976666e-18  7.14196038e-01  1.10225615e-16]
 [ 6.99942272e-01  2.19894073e-03  7.14192514e-01 -2.24370934e-03]] ...
TOD detector 1A flags = [0 0] ...
TOD detector 1B = [-0.18249422  0.41880421] ...
TOD detector 1B pointing = [[ 0.49493637 -0.49493637  0.50501286 -0.50501286]
 [ 0.49648881 -0.49337904  0.50342383 -0.50659691]] ...
TOD detector 1B flags = [0 0] ...
TOD detector 2A = [-0.12318372 -0.72886279] ...
TOD detector 2A pointing = [[ 0.64759555 -0.27492183  0.65890108 -0.26624679]
 [ 0.64845604 -0.272886    0.65806139 -0.26831547]] ...
TOD detector 2A flags = [0 0] ...
TOD detector 2B = [0.65272209 0.49313289] ...
TOD detector 2B pointing = [[ 0.26352011 -0.6523183   0.27764851 -0.65417834]
 [ 0.26556813 -0.65148721  0.27559198 -0.65504736]] ...
TOD detector 2B flags = [0 0] ...
TOD detector 3A = [ 0.44544955 -0.22812287] ...
TOD detector 3A pointing = [[ 0.71063346 -0.00617057  0.7035083   0.00617057]
 [ 0.71064934 -0.00393802  0.70352422  0.0039604 ]] ...
TOD detector 3A flags = [0 0] ...
TOD detector 3B = [ 0.11687748 -0.70990351] ...
TOD detector 3B pointing = [[ 0.49813049 -0.50685699  0.50181874 -0.49309224]
 [ 0.49972037 -0.50528957  0.50026717 -0.49466632]] ...
TOD detector 3B flags = [0 0] ...
TOD detector 4A = [0.78310587 1.09945966] ...
TOD detector 4A pointing = [[ 7.14196038e-01 -1.99129539e-18  6.99945726e-01  1.11889246e-16]
 [ 7.14192514e-01  2.24370934e-03  6.99942272e-01 -2.19894073e-03]] ...
TOD detector 4A flags = [0 0] ...
TOD detector 4B = [-0.17880228 -0.6572367 ] ...
TOD detector 4B pointing = [[ 0.50501286 -0.50501286  0.49493637 -0.49493637]
 [ 0.50659691 -0.50342383  0.49337904 -0.49648881]] ...
TOD detector 4B flags = [0 0] ...
TOD detector 5A = [-0.31273351  0.06337287] ...
TOD detector 5A pointing = [[ 0.71063346  0.00617057  0.7035083  -0.00617057]
 [ 0.71061057  0.00840305  0.70348545 -0.00838067]] ...
TOD detector 5A flags = [0 0] ...
TOD detector 5B = [ 0.33695828 -0.3620358 ] ...
TOD detector 5B pointing = [[ 0.50685699 -0.49813049  0.49309224 -0.50181874]
 [ 0.50841941 -0.4965357   0.4915133  -0.50336536]] ...
TOD detector 5B flags = [0 0] ...
TOD detector 6A = [-0.73963174 -0.28600736] ...
TOD detector 6A pointing = [[ 0.6523183  -0.26352011  0.65417834 -0.27764851]
 [ 0.65314295 -0.2614695   0.65330285 -0.2797023 ]] ...
TOD detector 6A flags = [0 0] ...
TOD detector 6B = [0.41450465 0.43518311] ...
TOD detector 6B pointing = [[ 0.27492183 -0.64759555  0.26624679 -0.65890108]
 [ 0.27695495 -0.64672866  0.26417548 -0.65973427]] ...
TOD detector 6B flags = [0 0] ...

Comm : Groups of Processes

A toast.Comm instance takes the global number of processes available (MPI.COMM_WORLD) and divides them into groups. Each process group is assigned one or more observations. Since observations are independent, this means that different groups can be independently working on separate observations in parallel. It also means that inter-process communication needed when working on a single observation can occur with a smaller set of processes.

At NERSC, this notebook is running on a login node, so we cannot use MPI. When MPI is disabled, constructing a default toast.Comm just produces a single group containing one process. See the parallel example at the end of this notebook for a case with multiple groups.
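The grouping itself is simple arithmetic. Here is a standalone sketch (plain Python, no MPI required) of how a world communicator could be divided into groups, assuming contiguous rank assignment; the mapping details inside TOAST may differ:

```python
# Map each world rank to a (group index, rank within group) pair for a
# world of world_size processes split into groups of group_size, with
# ranks assigned to groups contiguously.

def comm_layout(world_size, group_size):
    assert world_size % group_size == 0, "group size must divide world size"
    return {
        world_rank: (world_rank // group_size, world_rank % group_size)
        for world_rank in range(world_size)
    }

# 4 world processes split into 2 groups of 2 processes each:
print(comm_layout(4, 2))  # {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
```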

In [16]:
comm = toast.Comm()
print(comm)
<toast.Comm
  World MPI communicator = <mpi4py.MPI.Intracomm object at 0x7f8a5e9d8bb0>
  World MPI size = 1
  World MPI rank = 0
  Group MPI communicator = <mpi4py.MPI.Intracomm object at 0x7f8a5e9d8bb0>
  Group MPI size = 1
  Group MPI rank = 0
  Rank MPI communicator = <mpi4py.MPI.Intracomm object at 0x7f8a5e9d8b90>
>

Data : a Collection of Observations

A toast.Data instance is mainly just a list of observations. However, remember that each process group has its own set of observations. Since we have only one group of one process, this example is not very interesting. See the MPI example.

In [17]:
data = toast.Data(comm)
data.obs.append(obs)

Data Distribution

Recapping the previous sections: we have groups of processes, and each group has a set of observations. Within a single process group, the detector data is distributed across the processes of the group. That distribution is controlled by the size of the communicator passed to the TOD class and by the detranks parameter of the constructor. The detranks value sets the dimension of the process grid in the detector direction, and it must divide evenly into the number of processes in the communicator. For example, a value of "1" means that every process has all detectors for some span of time, while a value equal to the size of the communicator means that every process has some subset of the detectors for the entire observation.
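The grid arithmetic can be sketched in plain Python (an illustration of the constraint described above, not TOAST internals):

```python
# detranks sets the grid size in the detector direction and must divide
# the group size evenly; the remaining factor is the number of ranks in
# the sample direction.

def process_grid(group_size, detranks):
    if group_size % detranks != 0:
        raise ValueError("detranks must divide the group size evenly")
    sampranks = group_size // detranks
    return (detranks, sampranks)

# detranks = 1: every process has all detectors for a slice of samples.
print(process_grid(4, 1))  # (1, 4)
# detranks = group size: every process has some detectors for all samples.
print(process_grid(4, 4))  # (4, 1)
```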

As a concrete example, imagine that MPI.COMM_WORLD has 4 processes. We split this into 2 groups of 2 processes. There are 3 observations of varying lengths, and every group gets one or two observations. Here is the starting point of our data distribution.

Next we split the processes into 2 groups.

Then we assign our observations to the two groups.
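The assignment can be illustrated with a toy scheme. TOAST's actual distribution logic weighs observations by their size; here we simply deal them out round-robin, which also gives each of the 2 groups one or two of the 3 observations:

```python
# Toy round-robin assignment of observation indices to process groups
# (an illustration only, not TOAST's actual algorithm).

def assign_obs(n_obs, n_groups):
    assigned = {g: [] for g in range(n_groups)}
    for ob in range(n_obs):
        assigned[ob % n_groups].append(ob)
    return assigned

print(assign_obs(3, 2))  # {0: [0, 2], 1: [1]}
```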

When we create the TOD class in each observation, we specify how the data is distributed within that observation. If the detranks parameter is "1", then the dimension of the process grid in the detector direction is one, and every process has all detectors for a slice of time.

If detranks is set to the size of the group, then every process has a subset of the detectors for the full time span of the observation.

Working with Data

Once we have our distributed data set up, we usually feed this through a pipeline. There will be a lesson on pipelines later. Here we will create an entire fake dataset and work with it. The MPI introduction notebook will go into more details about working with distributed data.

In [18]:
comm = toast.Comm()
In [19]:
data = toast.Data(comm)
In [20]:
# Create 3 observations, each containing a TODCache instance.  We'll
# use the same focalplane and number of samples for each observation,
# but this is not required; each observation is independent.

for i in range(3):
    obsname = "observation_{:02d}".format(i)
    obs = dict()
    obs["name"] = obsname
    obs["id"] = "{:02d}".format(i)
    obs["tod"] = toast.tod.TODCache(
        comm.comm_group, detnames, nsamples, detquats=detquat
    )
    fill_tod(obs["tod"], fp)
    data.obs.append(obs)
In [21]:
# What does our distributed data look like now?
data.info()
Data distributed over 1 processes in 1 groups

observation 02:
  key id
  key name
  key tod
  1000 total samples, 14 detectors
  proc 0
    sample range 0 --> 999 in 1 chunks:
      0 --> 999
    timestamps 0.0 --> 49.95
    det 0A:
      pntg [6.533e-01 -2.706e-01 6.533e-01 -2.706e-01] --> [-6.524e-01 2.726e-01 -6.541e-01 2.685e-01]
      -1.073e-01 (0) --> -6.532e-01 (0)
        1000 good samples
        min = -2.8418e+00, max = 2.8013e+00, mean = 4.9750e-03, rms = 9.6425e-01
    det 0B:
      pntg [2.706e-01 -6.533e-01 2.706e-01 -6.533e-01] --> [-2.685e-01 6.541e-01 -2.726e-01 6.524e-01]
      -5.715e-01 (0) --> -7.082e-01 (0)
        1000 good samples
        min = -3.2995e+00, max = 3.3592e+00, mean = -3.8647e-02, rms = 1.0071e+00
    det 1A:
      pntg [6.999e-01 1.910e-18 7.142e-01 1.102e-16] --> [-6.999e-01 2.199e-03 -7.142e-01 -2.244e-03]
      2.246e-01 (0) --> 1.072e-01 (0)
        1000 good samples
        min = -3.0817e+00, max = 2.8059e+00, mean = 3.5820e-04, rms = 1.0081e+00
    det 1B:
      pntg [4.949e-01 -4.949e-01 5.050e-01 -5.050e-01] --> [-4.934e-01 4.965e-01 -5.066e-01 5.034e-01]
      -4.524e-01 (0) --> -2.024e+00 (0)
        1000 good samples
        min = -3.5640e+00, max = 3.0530e+00, mean = 1.0675e-02, rms = 9.6855e-01
    det 2A:
      pntg [6.476e-01 -2.749e-01 6.589e-01 -2.662e-01] --> [-6.467e-01 2.770e-01 -6.597e-01 2.642e-01]
      -1.843e+00 (0) --> 3.087e-01 (0)
        1000 good samples
        min = -3.3445e+00, max = 2.9321e+00, mean = -3.8562e-02, rms = 9.8488e-01
    det 2B:
      pntg [2.635e-01 -6.523e-01 2.776e-01 -6.542e-01] --> [-2.615e-01 6.531e-01 -2.797e-01 6.533e-01]
      7.663e-01 (0) --> -3.442e-01 (0)
        1000 good samples
        min = -3.3714e+00, max = 2.9475e+00, mean = -2.0800e-02, rms = 1.0236e+00
    det 3A:
      pntg [7.106e-01 -6.171e-03 7.035e-01 6.171e-03] --> [-7.106e-01 8.403e-03 -7.035e-01 -8.381e-03]
      -1.756e-01 (0) --> 5.364e-01 (0)
        1000 good samples
        min = -2.9836e+00, max = 3.2346e+00, mean = 2.7732e-02, rms = 9.8296e-01
    det 3B:
      pntg [4.981e-01 -5.069e-01 5.018e-01 -4.931e-01] --> [-4.965e-01 5.084e-01 -5.034e-01 4.915e-01]
      -6.060e-02 (0) --> 1.605e-01 (0)
        1000 good samples
        min = -3.0320e+00, max = 4.3120e+00, mean = -1.7300e-02, rms = 1.0184e+00
    det 4A:
      pntg [7.142e-01 -1.991e-18 6.999e-01 1.119e-16] --> [-7.142e-01 2.244e-03 -6.999e-01 -2.199e-03]
      -1.548e+00 (0) --> 1.508e+00 (0)
        1000 good samples
        min = -2.8804e+00, max = 3.7006e+00, mean = 7.1076e-02, rms = 9.6148e-01
    det 4B:
      pntg [5.050e-01 -5.050e-01 4.949e-01 -4.949e-01] --> [-5.034e-01 5.066e-01 -4.965e-01 4.934e-01]
      -3.538e-01 (0) --> -7.723e-02 (0)
        1000 good samples
        min = -2.6566e+00, max = 3.6116e+00, mean = 8.8672e-03, rms = 9.8317e-01
    det 5A:
      pntg [7.106e-01 6.171e-03 7.035e-01 -6.171e-03] --> [-7.106e-01 -3.938e-03 -7.035e-01 3.960e-03]
      7.894e-01 (0) --> 1.027e+00 (0)
        1000 good samples
        min = -2.9491e+00, max = 2.8820e+00, mean = 8.0623e-03, rms = 9.9094e-01
    det 5B:
      pntg [5.069e-01 -4.981e-01 4.931e-01 -5.018e-01] --> [-5.053e-01 4.997e-01 -4.947e-01 5.003e-01]
      1.479e-01 (0) --> -3.724e-02 (0)
        1000 good samples
        min = -3.1439e+00, max = 2.7848e+00, mean = -9.1418e-03, rms = 9.9075e-01
    det 6A:
      pntg [6.523e-01 -2.635e-01 6.542e-01 -2.776e-01] --> [-6.515e-01 2.656e-01 -6.550e-01 2.756e-01]
      4.832e-01 (0) --> 6.188e-01 (0)
        1000 good samples
        min = -3.1286e+00, max = 3.4804e+00, mean = 7.6529e-03, rms = 9.9506e-01
    det 6B:
      pntg [2.749e-01 -6.476e-01 2.662e-01 -6.589e-01] --> [-2.729e-01 6.485e-01 -2.683e-01 6.581e-01]
      -4.357e-01 (0) --> 6.479e-01 (0)
        1000 good samples
        min = -2.8760e+00, max = 2.7913e+00, mean = 1.4102e-02, rms = 9.9937e-01
    cache common_flags:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_0A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_0B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_1A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_1B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_2A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_2B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_3A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_3B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_4A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_4B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_5A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_5B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_6A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_6B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache quat_0A:
        min = -7.0711e-01, max = 7.0711e-01, mean = 1.9134e-04, rms = 5.0000e-01
    cache quat_0B:
        min = -7.0711e-01, max = 7.0711e-01, mean = -1.9134e-04, rms = 5.0000e-01
    cache quat_1A:
        min = -7.1420e-01, max = 7.1420e-01, mean = -1.9145e-03, rms = 5.0000e-01
    cache quat_1B:
        min = -7.1420e-01, max = 6.9995e-01, mean = -3.2074e-03, rms = 4.9999e-01
    cache quat_2A:
        min = -7.1066e-01, max = 7.0354e-01, mean = -2.2732e-04, rms = 5.0000e-01
    cache quat_2B:
        min = -7.1066e-01, max = 7.0354e-01, mean = -2.7360e-03, rms = 4.9999e-01
    cache quat_3A:
        min = -7.1061e-01, max = 7.1066e-01, mean = 3.4517e-03, rms = 4.9999e-01
    cache quat_3B:
        min = -7.0354e-01, max = 7.1066e-01, mean = 1.6037e-03, rms = 5.0000e-01
    cache quat_4A:
        min = -7.1419e-01, max = 7.1420e-01, mean = 2.6215e-03, rms = 4.9999e-01
    cache quat_4B:
        min = -6.9995e-01, max = 7.1420e-01, mean = 3.2074e-03, rms = 4.9999e-01
    cache quat_5A:
        min = -7.1066e-01, max = 7.1066e-01, mean = -4.7661e-04, rms = 5.0000e-01
    cache quat_5B:
        min = -7.0354e-01, max = 7.1066e-01, mean = 1.6037e-03, rms = 5.0000e-01
    cache quat_6A:
        min = -7.1066e-01, max = 7.0354e-01, mean = -2.3533e-03, rms = 4.9999e-01
    cache quat_6B:
        min = -7.1066e-01, max = 7.0354e-01, mean = -6.0999e-04, rms = 5.0000e-01
    cache signal_0A:
        min = -2.8418e+00, max = 2.8013e+00, mean = 4.9750e-03, rms = 9.6425e-01
    cache signal_0B:
        min = -3.2995e+00, max = 3.3592e+00, mean = -3.8647e-02, rms = 1.0071e+00
    cache signal_1A:
        min = -3.0817e+00, max = 2.8059e+00, mean = 3.5820e-04, rms = 1.0081e+00
    cache signal_1B:
        min = -3.5640e+00, max = 3.0530e+00, mean = 1.0675e-02, rms = 9.6855e-01
    cache signal_2A:
        min = -3.3445e+00, max = 2.9321e+00, mean = -3.8562e-02, rms = 9.8488e-01
    cache signal_2B:
        min = -3.3714e+00, max = 2.9475e+00, mean = -2.0800e-02, rms = 1.0236e+00
    cache signal_3A:
        min = -2.9836e+00, max = 3.2346e+00, mean = 2.7732e-02, rms = 9.8296e-01
    cache signal_3B:
        min = -3.0320e+00, max = 4.3120e+00, mean = -1.7300e-02, rms = 1.0184e+00
    cache signal_4A:
        min = -2.8804e+00, max = 3.7006e+00, mean = 7.1076e-02, rms = 9.6148e-01
    cache signal_4B:
        min = -2.6566e+00, max = 3.6116e+00, mean = 8.8672e-03, rms = 9.8317e-01
    cache signal_5A:
        min = -2.9491e+00, max = 2.8820e+00, mean = 8.0623e-03, rms = 9.9094e-01
    cache signal_5B:
        min = -3.1439e+00, max = 2.7848e+00, mean = -9.1418e-03, rms = 9.9075e-01
    cache signal_6A:
        min = -3.1286e+00, max = 3.4804e+00, mean = 7.6529e-03, rms = 9.9506e-01
    cache signal_6B:
        min = -2.8760e+00, max = 2.7913e+00, mean = 1.4102e-02, rms = 9.9937e-01
    cache timestamps:
        min = 0.0000e+00, max = 4.9950e+01, mean = 2.4975e+01, rms = 1.4434e+01
    cache toast_boresight:
        min = -7.0711e-01, max = 7.0711e-01, mean = 3.5355e-04, rms = 5.0000e-01
    cache toast_tod_pos:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache toast_tod_vel:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00

Here we see a dump of the distributed data. In this case the TODCache class stores its data internally in the cache, which is why the same data shows up in the cache dump as well as through the normal TOD access methods.
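Why the same data appears in both places can be sketched with a toy class (not the real TODCache, just the pattern): the read/write methods are thin wrappers around named cache buffers.

```python
# Toy stand-in for a cache-backed TOD class: write() and read() delegate
# to named buffers in the cache, so a cache dump shows the same data that
# the access methods return.

class ToyTODCache:
    def __init__(self):
        self.cache = {}  # buffer name -> data

    def write(self, detector, data):
        self.cache["signal_{}".format(detector)] = list(data)

    def read(self, detector):
        return self.cache["signal_{}".format(detector)]

tod = ToyTODCache()
tod.write("0A", [1.0, 2.0, 3.0])
print(tod.read("0A"))            # [1.0, 2.0, 3.0]
print(sorted(tod.cache.keys()))  # ['signal_0A']
```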

Next, we can dump this data to a TIDAS volume (directories of HDF5 files in this case).

In [22]:
import os
import shutil

if toast.tod.tidas_available:
    import toast.tod.tidas as tt
    datapath = "intro_data"
    if os.path.isdir(datapath):
        shutil.rmtree(datapath)
    exporter = tt.OpTidasExport(
        datapath,
        tt.TODTidas,
        backend="hdf5",
        comp="none",
        use_todchunks=True,
    )
    exporter.exec(data)
TOAST INFO: TIDAS:  Check path / create volume:  0.08 seconds (1 calls)
TOAST INFO: TIDAS:  world open volume:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Create obs groups and intervals:  0.05 seconds (1 calls)
TOAST INFO: TIDAS:  World volume sync:  0.02 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_00 chunk intervals:  0.01 seconds (1 calls)
TOAST INFO: TIDAS:  Group compute observation_00 sampsizes:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group instantiate observation_00 TOD:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group check TOD observation_00 process grid:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_00 timestamps and boresight:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_00 data:  0.01 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_01 chunk intervals:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group compute observation_01 sampsizes:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group instantiate observation_01 TOD:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group check TOD observation_01 process grid:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_01 timestamps and boresight:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_01 data:  0.01 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_02 chunk intervals:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group compute observation_02 sampsizes:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group instantiate observation_02 TOD:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group check TOD observation_02 process grid:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_02 timestamps and boresight:  0.00 seconds (1 calls)
TOAST INFO: TIDAS:  Group write observation_02 data:  0.01 seconds (1 calls)

And then we can load it back in...

In [23]:
if toast.tod.tidas_available:
    data = tt.load_tidas(
        comm,
        comm.group_size,
        datapath,
        "r",
        "detectors",
        tt.TODTidas,
        distintervals="chunks",
        group_dets="detectors",
    )
data.info()
Data distributed over 1 processes in 1 groups

observation 02:
  key id
  key name
  key tod
  1000 total samples, 14 detectors
  proc 0
    sample range 0 --> 999 in 1 chunks:
      0 --> 999
    timestamps 0.0 --> 49.95
    det 0A:
      pntg [6.533e-01 -2.706e-01 6.533e-01 -2.706e-01] --> [-6.524e-01 2.726e-01 -6.541e-01 2.685e-01]
      -1.073e-01 (0) --> -6.532e-01 (0)
        1000 good samples
        min = -2.8418e+00, max = 2.8013e+00, mean = 4.9750e-03, rms = 9.6425e-01
    det 0B:
      pntg [2.706e-01 -6.533e-01 2.706e-01 -6.533e-01] --> [-2.685e-01 6.541e-01 -2.726e-01 6.524e-01]
      -5.715e-01 (0) --> -7.082e-01 (0)
        1000 good samples
        min = -3.2995e+00, max = 3.3592e+00, mean = -3.8647e-02, rms = 1.0071e+00
    det 1A:
      pntg [6.999e-01 1.910e-18 7.142e-01 1.102e-16] --> [-6.999e-01 2.199e-03 -7.142e-01 -2.244e-03]
      2.246e-01 (0) --> 1.072e-01 (0)
        1000 good samples
        min = -3.0817e+00, max = 2.8059e+00, mean = 3.5820e-04, rms = 1.0081e+00
    det 1B:
      pntg [4.949e-01 -4.949e-01 5.050e-01 -5.050e-01] --> [-4.934e-01 4.965e-01 -5.066e-01 5.034e-01]
      -4.524e-01 (0) --> -2.024e+00 (0)
        1000 good samples
        min = -3.5640e+00, max = 3.0530e+00, mean = 1.0675e-02, rms = 9.6855e-01
    det 2A:
      pntg [6.476e-01 -2.749e-01 6.589e-01 -2.662e-01] --> [-6.467e-01 2.770e-01 -6.597e-01 2.642e-01]
      -1.843e+00 (0) --> 3.087e-01 (0)
        1000 good samples
        min = -3.3445e+00, max = 2.9321e+00, mean = -3.8562e-02, rms = 9.8488e-01
    det 2B:
      pntg [2.635e-01 -6.523e-01 2.776e-01 -6.542e-01] --> [-2.615e-01 6.531e-01 -2.797e-01 6.533e-01]
      7.663e-01 (0) --> -3.442e-01 (0)
        1000 good samples
        min = -3.3714e+00, max = 2.9475e+00, mean = -2.0800e-02, rms = 1.0236e+00
    det 3A:
      pntg [7.106e-01 -6.171e-03 7.035e-01 6.171e-03] --> [-7.106e-01 8.403e-03 -7.035e-01 -8.381e-03]
      -1.756e-01 (0) --> 5.364e-01 (0)
        1000 good samples
        min = -2.9836e+00, max = 3.2346e+00, mean = 2.7732e-02, rms = 9.8296e-01
    det 3B:
      pntg [4.981e-01 -5.069e-01 5.018e-01 -4.931e-01] --> [-4.965e-01 5.084e-01 -5.034e-01 4.915e-01]
      -6.060e-02 (0) --> 1.605e-01 (0)
        1000 good samples
        min = -3.0320e+00, max = 4.3120e+00, mean = -1.7300e-02, rms = 1.0184e+00
    det 4A:
      pntg [7.142e-01 -1.991e-18 6.999e-01 1.119e-16] --> [-7.142e-01 2.244e-03 -6.999e-01 -2.199e-03]
      -1.548e+00 (0) --> 1.508e+00 (0)
        1000 good samples
        min = -2.8804e+00, max = 3.7006e+00, mean = 7.1076e-02, rms = 9.6148e-01
    det 4B:
      pntg [5.050e-01 -5.050e-01 4.949e-01 -4.949e-01] --> [-5.034e-01 5.066e-01 -4.965e-01 4.934e-01]
      -3.538e-01 (0) --> -7.723e-02 (0)
        1000 good samples
        min = -2.6566e+00, max = 3.6116e+00, mean = 8.8672e-03, rms = 9.8317e-01
    det 5A:
      pntg [7.106e-01 6.171e-03 7.035e-01 -6.171e-03] --> [-7.106e-01 -3.938e-03 -7.035e-01 3.960e-03]
      7.894e-01 (0) --> 1.027e+00 (0)
        1000 good samples
        min = -2.9491e+00, max = 2.8820e+00, mean = 8.0623e-03, rms = 9.9094e-01
    det 5B:
      pntg [5.069e-01 -4.981e-01 4.931e-01 -5.018e-01] --> [-5.053e-01 4.997e-01 -4.947e-01 5.003e-01]
      1.479e-01 (0) --> -3.724e-02 (0)
        1000 good samples
        min = -3.1439e+00, max = 2.7848e+00, mean = -9.1418e-03, rms = 9.9075e-01
    det 6A:
      pntg [6.523e-01 -2.635e-01 6.542e-01 -2.776e-01] --> [-6.515e-01 2.656e-01 -6.550e-01 2.756e-01]
      4.832e-01 (0) --> 6.188e-01 (0)
        1000 good samples
        min = -3.1286e+00, max = 3.4804e+00, mean = 7.6529e-03, rms = 9.9506e-01
    det 6B:
      pntg [2.749e-01 -6.476e-01 2.662e-01 -6.589e-01] --> [-2.729e-01 6.485e-01 -2.683e-01 6.581e-01]
      -4.357e-01 (0) --> 6.479e-01 (0)
        1000 good samples
        min = -2.8760e+00, max = 2.7913e+00, mean = 1.4102e-02, rms = 9.9937e-01
    cache BORE_QUAT:
        min = -7.0711e-01, max = 7.0711e-01, mean = 3.5355e-04, rms = 5.0000e-01
    cache common_flags:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_0A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_0B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_1A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_1B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_2A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_2B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_3A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_3B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_4A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_4B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_5A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_5B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_6A:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache flags_6B:
        min = 0.0000e+00, max = 0.0000e+00, mean = 0.0000e+00, rms = 0.0000e+00
    cache quat_0A:
        min = -7.0711e-01, max = 7.0711e-01, mean = 1.9134e-04, rms = 5.0000e-01
    cache quat_0B:
        min = -7.0711e-01, max = 7.0711e-01, mean = -1.9134e-04, rms = 5.0000e-01
    cache quat_1A:
        min = -7.1420e-01, max = 7.1420e-01, mean = -1.9145e-03, rms = 5.0000e-01
    cache quat_1B:
        min = -7.1420e-01, max = 6.9995e-01, mean = -3.2074e-03, rms = 4.9999e-01
    cache quat_2A:
        min = -7.1066e-01, max = 7.0354e-01, mean = -2.2732e-04, rms = 5.0000e-01
    cache quat_2B:
        min = -7.1066e-01, max = 7.0354e-01, mean = -2.7360e-03, rms = 4.9999e-01
    cache quat_3A:
        min = -7.1061e-01, max = 7.1066e-01, mean = 3.4517e-03, rms = 4.9999e-01
    cache quat_3B:
        min = -7.0354e-01, max = 7.1066e-01, mean = 1.6037e-03, rms = 5.0000e-01
    cache quat_4A:
        min = -7.1419e-01, max = 7.1420e-01, mean = 2.6215e-03, rms = 4.9999e-01
    cache quat_4B:
        min = -6.9995e-01, max = 7.1420e-01, mean = 3.2074e-03, rms = 4.9999e-01
    cache quat_5A:
        min = -7.1066e-01, max = 7.1066e-01, mean = -4.7661e-04, rms = 5.0000e-01
    cache quat_5B:
        min = -7.0354e-01, max = 7.1066e-01, mean = 1.6037e-03, rms = 5.0000e-01
    cache quat_6A:
        min = -7.1066e-01, max = 7.0354e-01, mean = -2.3533e-03, rms = 4.9999e-01
    cache quat_6B:
        min = -7.1066e-01, max = 7.0354e-01, mean = -6.0999e-04, rms = 5.0000e-01
    cache signal_0A:
        min = -2.8418e+00, max = 2.8013e+00, mean = 4.9750e-03, rms = 9.6425e-01
    cache signal_0B:
        min = -3.2995e+00, max = 3.3592e+00, mean = -3.8647e-02, rms = 1.0071e+00
    cache signal_1A:
        min = -3.0817e+00, max = 2.8059e+00, mean = 3.5820e-04, rms = 1.0081e+00
    cache signal_1B:
        min = -3.5640e+00, max = 3.0530e+00, mean = 1.0675e-02, rms = 9.6855e-01
    cache signal_2A:
        min = -3.3445e+00, max = 2.9321e+00, mean = -3.8562e-02, rms = 9.8488e-01
    cache signal_2B:
        min = -3.3714e+00, max = 2.9475e+00, mean = -2.0800e-02, rms = 1.0236e+00
    cache signal_3A:
        min = -2.9836e+00, max = 3.2346e+00, mean = 2.7732e-02, rms = 9.8296e-01
    cache signal_3B:
        min = -3.0320e+00, max = 4.3120e+00, mean = -1.7300e-02, rms = 1.0184e+00
    cache signal_4A:
        min = -2.8804e+00, max = 3.7006e+00, mean = 7.1076e-02, rms = 9.6148e-01
    cache signal_4B:
        min = -2.6566e+00, max = 3.6116e+00, mean = 8.8672e-03, rms = 9.8317e-01
    cache signal_5A:
        min = -2.9491e+00, max = 2.8820e+00, mean = 8.0623e-03, rms = 9.9094e-01
    cache signal_5B:
        min = -3.1439e+00, max = 2.7848e+00, mean = -9.1418e-03, rms = 9.9075e-01
    cache signal_6A:
        min = -3.1286e+00, max = 3.4804e+00, mean = 7.6529e-03, rms = 9.9506e-01
    cache signal_6B:
        min = -2.8760e+00, max = 2.7913e+00, mean = 1.4102e-02, rms = 9.9937e-01
    cache timestamps:
        min = 0.0000e+00, max = 4.9950e+01, mean = 2.4975e+01, rms = 1.4434e+01

Utilities

There are many utilities in the TOAST package that use compiled code internally. These include:

  • toast.rng: Streamed random number generation, with support for drawing random samples from any location within a stream.

  • toast.qarray: Vectorized quaternion operations.

  • toast.fft: API wrapper around different vendor FFT packages.

  • toast.cache: A class implementing a dictionary of C-allocated numpy arrays.

  • toast.healpix: A subset of pixel projection routines, SIMD-vectorized and threaded.

  • toast.timing: Simple serial timers, global named timers per process, a decorator to time calls to functions, and MPI tools to gather timing statistics from multiple processes.
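
The timing tools are not demonstrated in the cells below, but the decorator pattern they provide can be sketched in plain Python. This is a stand-in using only the standard library, not the toast API:

```python
import functools
import time

def function_timer(f):
    # Stand-in for a toast-style timing decorator: measure each call
    # with a monotonic clock and report the elapsed wall time.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = f(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print("{}: {:0.6f} s".format(f.__name__, elapsed))
        return result
    return wrapper

@function_timer
def accumulate(n):
    # A trivial workload to time.
    return sum(range(n))

total = accumulate(100000)
```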

Random Number Example

Here is a quick example of threaded generation of random numbers drawn from a unit-variance Gaussian distribution. The pair of uint64 "key" values, together with the first value of the "counter" pair, determines the stream; the second value of the counter pair is effectively the sample index within that stream. We can draw randoms from anywhere in the stream in a reproducible fashion (i.e. this random generator is stateless). Under the hood, this uses the Random123 package on each thread.

In [24]:
import toast.rng as rng

# Number of random samples
nrng = 10
In [25]:
# Draw randoms from the beginning of a stream
rng1 = rng.random(
    nrng, key=[12, 34], counter=[56, 0], sampler="gaussian", threads=True
)
In [26]:
# Draw randoms from some later starting point in the stream
rng2 = rng.random(
    nrng, key=[12, 34], counter=[56, 4], sampler="gaussian", threads=True
)
In [27]:
# The returned objects are buffer providers, so can be used like a numpy array.
print("Returned RNG buffers:")
print(rng1)
print(rng2)
Returned RNG buffers:
<AlignedF64 10 elements: -1.93456 0.0142729 ... 1.00982 0.190195>
<AlignedF64 10 elements: -0.661366 0.0715733 ... -0.696467 1.31699>
In [28]:
# Compare the elements.  Note how the overlapping sample indices match.  The
# randoms drawn for any given sample agree regardless of the starting sample.
print("------ rng1 ------")
for i in range(nrng):
    print("rng1 {}:  {}".format(i, rng1[i]))
print("------ rng2 ------")
for i in range(nrng):
    print("rng2 {}:  {}".format(i + 4, rng2[i]))
------ rng1 ------
rng1 0:  -1.9345557154123538
rng1 1:  0.014272943598316932
rng1 2:  0.45226011897093527
rng1 3:  -1.957844901847933
rng1 4:  -0.6613662026904461
rng1 5:  0.07157330335682659
rng1 6:  -0.6785937530814287
rng1 7:  1.1896892565821966
rng1 8:  1.0098155098534918
rng1 9:  0.1901951884823849
------ rng2 ------
rng2 4:  -0.6613662026904461
rng2 5:  0.07157330335682659
rng2 6:  -0.6785937530814287
rng2 7:  1.1896892565821966
rng2 8:  1.0098155098534918
rng2 9:  0.1901951884823849
rng2 10:  0.04850349309129753
rng2 11:  0.6481059989834956
rng2 12:  -0.6964671519913512
rng2 13:  1.3169924602685523

Quaternion Array Example

The quaternion manipulation functions internally attempt to improve performance using OpenMP SIMD directives and threading in cases where it makes sense. The Python API is modelled after the quaternionarray package (https://github.com/zonca/quaternionarray/). There are functions for common operations like multiplying quaternion arrays, rotating arrays of vectors, converting to and from angle representations, SLERP, etc.

In [29]:
import toast.qarray as qa

# Number of points for this example

nqa = 5
In [30]:
# Make some fake rotation data by sweeping through theta / phi / pa angles

theta = np.linspace(0.0, np.pi, num=nqa)
phi = np.linspace(0.0, 2 * np.pi, num=nqa)
pa = np.zeros(nqa)
print("----- input angles -----")
print("theta = ", theta)
print("phi = ", phi)
print("pa = ", pa)
----- input angles -----
theta =  [0.         0.78539816 1.57079633 2.35619449 3.14159265]
phi =  [0.         1.57079633 3.14159265 4.71238898 6.28318531]
pa =  [0. 0. 0. 0. 0.]
In [31]:
# Convert to quaternions

quat = qa.from_angles(theta, phi, pa)

print("\n----- output quaternions -----")
print(quat)
----- output quaternions -----
[[ 0.00000000e+00  0.00000000e+00  1.00000000e+00  2.22044605e-16]
 [ 2.70598050e-01  2.70598050e-01  6.53281482e-01 -6.53281482e-01]
 [ 0.00000000e+00  7.07106781e-01  1.11022302e-16 -7.07106781e-01]
 [-6.53281482e-01  6.53281482e-01 -2.70598050e-01 -2.70598050e-01]
 [-1.00000000e+00  1.11022302e-16 -6.12323400e-17 -1.84889275e-32]]
In [32]:
# Use these to rotate a vector

zaxis = np.array([0.0, 0.0, 1.0])
zrot = qa.rotate(quat, zaxis)

print("\n---- Z-axis rotated by quaternions ----")
print(zrot)
---- Z-axis rotated by quaternions ----
[[ 0.00000000e+00  0.00000000e+00  1.00000000e+00]
 [ 0.00000000e+00  7.07106781e-01  7.07106781e-01]
 [-1.00000000e+00  1.57009246e-16  2.22044605e-16]
 [-2.77555756e-16 -7.07106781e-01 -7.07106781e-01]
 [ 1.22464680e-16 -5.05741657e-32 -1.00000000e+00]]
In [33]:
# Rotate a different vector with each quaternion

zout = qa.rotate(quat, zrot)

print("\n---- Arbitrary vectors rotated by quaternions ----")
print(zout)
---- Arbitrary vectors rotated by quaternions ----
[[ 0.00000000e+00  0.00000000e+00  1.00000000e+00]
 [ 7.07106781e-01  5.00000000e-01  5.00000000e-01]
 [-4.44089210e-16  3.14018492e-16 -1.00000000e+00]
 [ 7.07106781e-01  5.00000000e-01  5.00000000e-01]
 [ 0.00000000e+00  7.39557099e-32  1.00000000e+00]]
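
The rotation that qa.rotate performs can be reproduced with the standard quaternion sandwich product, expanded into cross products. A minimal single-vector sketch, assuming the scalar-last [x, y, z, w] convention used above:

```python
import numpy as np

def quat_rotate_one(q, v):
    # Rotate vector v by unit quaternion q = [x, y, z, w] (scalar last),
    # using v' = v + 2 w (qv x v) + 2 qv x (qv x v).
    qv, qw = np.asarray(q[:3]), q[3]
    t = 2.0 * np.cross(qv, v)
    return v + qw * t + np.cross(qv, t)

# A 90-degree rotation about the x-axis takes the z-axis to -y.
s = np.sin(np.pi / 4)
c = np.cos(np.pi / 4)
q_x90 = np.array([s, 0.0, 0.0, c])
print(quat_rotate_one(q_x90, np.array([0.0, 0.0, 1.0])))
```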
In [34]:
# Multiply two quaternion arrays

qcopy = np.array(quat)
qout = qa.mult(quat, qcopy)

print("\n---- Product of two quaternion arrays ----")
print(qout)
---- Product of two quaternion arrays ----
[[ 0.00000000e+00  0.00000000e+00  4.44089210e-16 -1.00000000e+00]
 [-3.53553391e-01 -3.53553391e-01 -8.53553391e-01 -1.46446609e-01]
 [ 0.00000000e+00 -1.00000000e+00 -1.57009246e-16  2.22044605e-16]
 [ 3.53553391e-01 -3.53553391e-01  1.46446609e-01 -8.53553391e-01]
 [ 3.69778549e-32 -2.05268330e-48  2.26424058e-48 -1.00000000e+00]]
In [35]:
# SLERP quaternions

qtime = 3.0 * np.arange(nqa)
qtargettime = np.arange(3.0 * (nqa - 1) + 1)
qslerped = qa.slerp(qtargettime, qtime, quat)

print("\n---- SLERP input ----")
for t, q in zip(qtime, quat):
    print("t = {} : {}".format(t, q))
    
print("\n---- SLERP output ----")
for t, q in zip(qtargettime, qslerped):
    print("t = {} : {}".format(t, q))
---- SLERP input ----
t = 0.0 : [0.00000000e+00 0.00000000e+00 1.00000000e+00 2.22044605e-16]
t = 3.0 : [ 0.27059805  0.27059805  0.65328148 -0.65328148]
t = 6.0 : [ 0.00000000e+00  7.07106781e-01  1.11022302e-16 -7.07106781e-01]
t = 9.0 : [-0.65328148  0.65328148 -0.27059805 -0.27059805]
t = 12.0 : [-1.00000000e+00  1.11022302e-16 -6.12323400e-17 -1.84889275e-32]

---- SLERP output ----
t = 0.0 : [0.00000000e+00 0.00000000e+00 1.00000000e+00 2.22044605e-16]
t = 1.0 : [ 0.10093174  0.10093174  0.95929668 -0.24367078]
t = 2.0 : [ 0.19364697  0.19364697  0.84050023 -0.46750515]
t = 3.0 : [ 0.27059805  0.27059805  0.65328148 -0.65328148]
t = 4.0 : [ 0.19364697  0.45739433  0.46750515 -0.7312525 ]
t = 5.0 : [ 0.10093174  0.60695567  0.24367078 -0.74969471]
t = 6.0 : [ 0.00000000e+00  7.07106781e-01  1.11022302e-16 -7.07106781e-01]
t = 7.0 : [-0.24367078  0.74969471 -0.10093174 -0.60695567]
t = 8.0 : [-0.46750515  0.7312525  -0.19364697 -0.45739433]
t = 9.0 : [-0.65328148  0.65328148 -0.27059805 -0.27059805]
t = 10.0 : [-0.84050023  0.46750515 -0.19364697 -0.19364697]
t = 11.0 : [-0.95929668  0.24367078 -0.10093174 -0.10093174]
t = 12.0 : [-1.00000000e+00  1.11022302e-16 -6.12323400e-17 -1.84889275e-32]
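
For reference, a single-pair SLERP can be written directly from its defining formula. This is a plain-NumPy sketch of the interpolation qa.slerp performs, not the toast implementation:

```python
import numpy as np

def slerp_pair(q0, q1, t):
    # Spherical linear interpolation between unit quaternions q0 and q1
    # at fraction t in [0, 1].
    q0 = np.asarray(q0, dtype=np.float64)
    q1 = np.asarray(q1, dtype=np.float64)
    d = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if d < 0.0:
        # Flip one endpoint so we interpolate along the shorter arc.
        q1, d = -q1, -d
    omega = np.arccos(d)
    if omega < 1.0e-12:
        return q0.copy()
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * q0 + (np.sin(t * omega) / s) * q1

# Halfway between the identity and a 90-degree z rotation is a
# 45-degree z rotation (scalar-last convention).
qa0 = np.array([0.0, 0.0, 0.0, 1.0])
qa1 = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
mid = slerp_pair(qa0, qa1, 0.5)
print(mid)
```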

FFT Example

The internal FFT functions in TOAST are intentionally limited in scope, supporting only batched 1D real FFTs. These are used for simulated noise generation and timestream filtering. Internally, the compiled code can use either FFTW or MKL as the backend.

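The same batched real-FFT round trip can be expressed with NumPy, which is a useful mental model for what the toast wrappers compute. Note that NumPy returns complex spectra, while toast's r1d functions use a packed half-complex real layout:

```python
import numpy as np

nbatch, nfft = 5, 1024

# One batch of real timestreams, transformed along the sample axis
# and back again.
data = np.random.default_rng(12345).standard_normal((nbatch, nfft))
spectra = np.fft.rfft(data, axis=1)
roundtrip = np.fft.irfft(spectra, n=nfft, axis=1)

print(np.allclose(data, roundtrip))
```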
In [36]:
# Number of batched FFTs

nbatch = 5

# FFT length

nfft = 65536
In [37]:
# Create some fake data

infft = np.zeros((nbatch, nfft), dtype=np.float64)
for b in range(nbatch):
    infft[b, :] = rng.random(nfft, key=[0, 0], counter=[b, 0], sampler="gaussian")

print("----- FFT input -----")
print(infft)
----- FFT input -----
[[ 0.70824414  0.37958204 -0.62191667 ...  0.77954278  1.12705665
   0.21430745]
 [ 0.61372834  0.68541016 -0.82076732 ... -0.10495521 -0.09820802
   0.1019775 ]
 [-0.25952652 -0.41435533  0.57094265 ...  0.32334547  2.15186219
  -0.10320734]
 [-0.18910596 -1.50802991 -1.65130707 ...  0.42610591  1.17170788
   1.54472861]
 [-1.4962988  -0.00235431 -1.50373344 ...  0.97027549  0.94612694
   0.80572723]]
In [38]:
# Forward FFT

outfft = toast.fft.r1d_forward(infft)

print("\n----- FFT output -----")
print(outfft)
----- FFT output -----
[[ 428.42060388 -138.03674787 -181.0470772  ... -403.91871131
   -80.09506675 -210.21894707]
 [-236.80183938 -293.36911153 -299.89827436 ... -265.36475328
    79.30045561 -152.79215102]
 [ 407.62843731 -321.18213065  149.8744358  ...  115.39517
   161.42894263   -8.53315689]
 [-353.92938439 -163.58217755  199.29345241 ...   41.98100849
  -151.62213508  227.94024389]
 [-287.73541094  -94.54962966  -41.95149462 ...  121.73873742
   200.54675492  233.86103108]]
In [39]:
# Reverse FFT

backfft = toast.fft.r1d_backward(outfft)

print("\n----- FFT inverse output -----")
print(backfft)
----- FFT inverse output -----
[[ 0.70824414  0.37958204 -0.62191667 ...  0.77954278  1.12705665
   0.21430745]
 [ 0.61372834  0.68541016 -0.82076732 ... -0.10495521 -0.09820802
   0.1019775 ]
 [-0.25952652 -0.41435533  0.57094265 ...  0.32334547  2.15186219
  -0.10320734]
 [-0.18910596 -1.50802991 -1.65130707 ...  0.42610591  1.17170788
   1.54472861]
 [-1.4962988  -0.00235431 -1.50373344 ...  0.97027549  0.94612694
   0.80572723]]

Cache Example

The Cache class provides a mechanism to work around Python's memory management. There are times when we want to allocate memory and explicitly free it, without waiting for garbage collection. Every instance of toast.Cache acts as a dictionary of numpy arrays. Internally, the memory of each entry is a flat-packed std::vector with a custom allocator that guarantees aligned allocation. Aligned memory is required for SIMD operations, both in TOAST and in external libraries, so buffers in a Cache instance can be used directly for such operations.

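The aligned allocation that Cache performs internally can be mimicked in pure NumPy. This is only an illustration of the idea (over-allocate, then offset to an aligned address), not how toast implements it in C++:

```python
import numpy as np

def aligned_zeros(shape, dtype=np.float64, align=64):
    # Over-allocate by `align` bytes, then slice so that the data
    # pointer of the returned view is a multiple of `align`.
    nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    buf = np.zeros(nbytes + align, dtype=np.uint8)
    offset = (-buf.ctypes.data) % align
    return buf[offset:offset + nbytes].view(dtype).reshape(shape)

arr = aligned_zeros((20,))
print(arr.ctypes.data % 64)
```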
In [40]:
from toast.cache import Cache

# Example array dimensions

cnames = ["c1", "c2"]
cshapes = {
    "c1" : (20,),
    "c2" : (2, 3, 2)
}
ctyps = {
    "c1" : np.float64,
    "c2" : np.uint16
}
In [41]:
# A cache instance

cache = Cache()
In [42]:
# Create some empty arrays in the cache

for cn in cnames:
    cache.create(cn, ctyps[cn], cshapes[cn])

print("---- Cache object ----")
print(cache)
print("\n---- Now contains ----")
for cn in cnames:
    print("{}:  {}".format(cn, cache.reference(cn)))
print("Size = ", cache.report(silent=True), " bytes")
---- Cache object ----
<toast.cache.Cache object at 0x7f8a192ce210>

---- Now contains ----
c1:  [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
c2:  [[[0 0]
  [0 0]
  [0 0]]

 [[0 0]
  [0 0]
  [0 0]]]
Size =  184  bytes
In [43]:
# Fill existing buffers

# Get a reference to the buffer
cdata = cache.reference("c1")

# Assign elements.
cdata[:] = np.random.random(cshapes["c1"])

# Delete the reference
del cdata
In [44]:
cdata = cache.reference("c2")
idx = 0
for x in range(cshapes["c2"][0]):
    for y in range(cshapes["c2"][1]):
        for z in range(cshapes["c2"][2]):
            cdata[x, y, z] = idx
            idx += 1
del cdata
    
print("\n---- Contents after filling ----")
for cn in cnames:
    print("{}:  {}".format(cn, cache.reference(cn)))
print("Size = ", cache.report(silent=True), " bytes")
---- Contents after filling ----
c1:  [0.44328166 0.80795194 0.37464736 0.96059592 0.5580428  0.24322618
 0.4750618  0.26017303 0.43853396 0.99500193 0.64116086 0.16590164
 0.82387159 0.60791233 0.72099999 0.21817445 0.8606358  0.70778716
 0.63149889 0.6105775 ]
c2:  [[[ 0  1]
  [ 2  3]
  [ 4  5]]

 [[ 6  7]
  [ 8  9]
  [10 11]]]
Size =  184  bytes
In [45]:
# We can also "put" existing numpy arrays, which are then copied into
# the cache

np1 = np.random.normal(size=10)
np2 = np.random.randint(0, high=255, dtype=np.uint16, size=12).reshape((2, 3, 2))

cache.put("p1", np1)
cache.put("p2", np2)

print("\n---- Contents after putting numpy arrays ----")

for cn in list(cache.keys()):
    print("{}:  {}".format(cn, cache.reference(cn)))
print("Size = ", cache.report(silent=True), " bytes")
---- Contents after putting numpy arrays ----
c1:  [0.44328166 0.80795194 0.37464736 0.96059592 0.5580428  0.24322618
 0.4750618  0.26017303 0.43853396 0.99500193 0.64116086 0.16590164
 0.82387159 0.60791233 0.72099999 0.21817445 0.8606358  0.70778716
 0.63149889 0.6105775 ]
c2:  [[[ 0  1]
  [ 2  3]
  [ 4  5]]

 [[ 6  7]
  [ 8  9]
  [10 11]]]
p1:  [-0.98691197 -0.62132374 -0.83340015 -0.41374233 -1.08316953  0.20401303
 -1.76006368 -1.6032611   0.05804535 -1.30782797]
p2:  [[[250 250]
  [ 52  52]
  [ 15 156]]

 [[  5 163]
  [ 84 117]
  [137  62]]]
Size =  288  bytes

Running the Test Suite

TOAST includes an extensive test suite built into the package. Running all of the tests takes some time, but you can also run a single test module by specifying the name of its file in the toast/tests directory (without the ".py" extension):

In [46]:
import toast.tests

# Run just a couple simple tests in toast/tests/env.py
toast.tests.run("env")
test_comm (toast.tests.env.EnvTest) ... 
<toast.Comm
  World MPI communicator = <mpi4py.MPI.Intracomm object at 0x7f8a5e9d8bb0>
  World MPI size = 1
  World MPI rank = 0
  Group MPI communicator = <mpi4py.MPI.Intracomm object at 0x7f8a5e9d8bb0>
  Group MPI size = 1
  Group MPI rank = 0
  Rank MPI communicator = <mpi4py.MPI.Intracomm object at 0x7f8a5e9d8b90>
>
[0]ok 
test_env (toast.tests.env.EnvTest) ... 
<toast.Environment
  Source code version = 2.3.5.dev1474
  Logging level = INFO
  Handling enabled for 0 signals:
  Max threads = 12
  MPI build enabled
  MPI runtime enabled
>
[0]ok 

----------------------------------------------------------------------
Ran 2 tests in 0.003s

[0] OK
Out[46]:
0
In [47]:
# Now run **ALL** the (serial) tests
# toast.tests.run()
In [ ]: