Evaluate the Kerr likelihood; fit it with Poisson-like and Gaussian representations

The Kerr likelihood formula

For each cell with a set of photons with weights $w$, the log likelihood as a function of $\alpha$ and $\beta$ is

$$ \displaystyle\log\mathcal{L}(\alpha,\beta\ |\ w)\ = \sum_{w} \log \big( 1 + \alpha\ w + \beta\ (1-w) \big) - (\alpha\ S + \beta\ B) $$

where $\alpha$ and $\beta$ are the excess signal and background fractions and $S$ and $B$ are the expected numbers of signal and background counts for the cell, determined from the full data set and relative exposure for the cell.
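A minimal sketch of evaluating this formula directly with NumPy; the weight array and the values of S and B below are illustrative, not taken from any real cell.

import numpy as np

def kerr_loglike(alpha, beta, w, S, B):
    # log L(alpha, beta | w) = sum log(1 + alpha*w + beta*(1-w)) - (alpha*S + beta*B)
    return np.sum(np.log(1 + alpha*w + beta*(1 - w))) - (alpha*S + beta*B)

# illustrative cell: 80 signal-like and 20 background-like photons
w = np.array([0.9]*80 + [0.1]*20)
print(kerr_loglike(0.0, 0.0, w, S=80, B=20))   # alpha = beta = 0 gives zero by construction
print(kerr_loglike(0.2, 0.0, w, S=80, B=20))   # a 20% signal excess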

Usually, we assume $\beta=0$, that the background is not varying, and look for signal variation.

In the special case where all $w$ values are 1, this reduces to the Poisson likelihood $n \log(1+\alpha) - \alpha S$, where $n$ is the number of photons; maximizing it gives the solution $\alpha = n/S - 1$.
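A quick numerical check of this special case, maximizing the likelihood with scipy; the values of n and S are illustrative.

import numpy as np
from scipy import optimize

n, S = 120, 100
neg_loglike = lambda a: -(n*np.log(1 + a) - a*S)       # all weights equal to 1
res = optimize.minimize_scalar(neg_loglike, bounds=(-0.99, 5), method='bounded')
print(res.x, n/S - 1)                                  # both close to 0.2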

This module uses this evaluation of the likelihood for a cell to define a Poisson-like three-parameter function that approximates it.

The case of a known variable background source is not equivalent to allowing $\beta$ to vary (although detecting such variation would be cause for concern), since the rest of the background would not be varying. That case could be handled by allowing for two sources.

Special case: expand the log

The case where the variation of the signal is small and the background dominates deserves attention. Suppose $\max(|\alpha w|) \ll 1$ and $\beta$ is also small.

Then we only need the sums $\sum w$ and $\sum w^2$. Denoting them as $W$ and $U$, the solution is

$$ \alpha = (W-S)\ /U\ \pm 1/ \sqrt U $$
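This follows from expanding the logarithm to second order (with $\beta=0$):

$$ \log\mathcal{L}(\alpha) \approx \sum_{w} \big(\alpha w - \tfrac{1}{2}\alpha^2 w^2\big) - \alpha S = \alpha\,(W-S) - \tfrac{1}{2}\alpha^2 U $$

which is maximized at $\hat\alpha = (W-S)/U$, with the $1/\sqrt{U}$ uncertainty coming from the curvature $-U$.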

However, for a source that is mostly quiescent, with brief flares, we have $\alpha=-1+\epsilon$, with $0 < \epsilon \ll 1$.

class LogLike[source]

LogLike(cell)

implement Kerr Eqn 2 for a single interval, or cell

  • cell -- a dict with w, n, S, B

Example

Create a cell for further tests. Plot the likelihood for a cell with 100 weights
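A sketch of such a cell, following the pattern of the Gaussian2dRep example below; the call signature ll([rate, 0]), a relative rate with $\beta$ fixed at zero, is assumed from the commented cells that follow.

import numpy as np
import matplotlib.pyplot as plt

n = 100
w = np.hstack([np.full(60, 1, np.float32), np.full(40, 0, np.float32)])
cell = dict(n=n, w=w, S=60, B=40)
ll = LogLike(cell)

x = np.linspace(0.5, 1.5, 101)            # relative signal rate
plt.plot(x, [ll([r, 0]) for r in x])      # beta fixed at zero
plt.xlabel('relative rate'); plt.ylabel('log likelihood');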

# def fitit(loglike, pars):
#     pars=np.atleast_1d(pars)
#     fun = lambda p:  -loglike(p)
#     grad = lambda p: -np.atleast_1d(loglike.gradient(p))
#     print(f'Estimate: {pars.round(4)},  function: {fun(pars):.3f}, gradient: {(np.array(grad(pars))).round(4)}')
#     fi = optimize.fmin_l_bfgs_b(fun, pars, fprime=grad)
#     fit_pars = np.array(fi[0])
#     print(f'Fit:      {fit_pars.round(4)}, function {fi[1]:.3f}, gradient: {(fi[2]["grad"].round(4))}' )
#     return fi[2]

# loglike = ll
# fun = lambda p:  -loglike(p)
# grad = lambda p: -np.atleast_1d(loglike.gradient(p))
# def check(pars):
#     pars = np.atleast_1d(pars)
#     print(f'pars: {pars.round(4)},  function: {fun(pars):.3f}, gradient: {(np.array(grad(pars))).round(4)}')
    
# check([1,0]), check([1.01,0]), check([0.99,0]);

# fitit(ll, [1.,0])

# fitit(ll, [0.99, 0])

# pars=[1,0.1]
# ll.gradient(pars)

class GaussianRep[source]

GaussianRep(loglike, fix_beta=True)

Manage fits to the loglike object

The 1-D gaussian fit parameters

If the rate is significant, more than 5 $\sigma$, the resulting Poisson-like parameters are straightforward to fit.

If $s$ and $v$ are the signal rate and its variance, determined by the 1-d Gaussian fit for $n$ measurements, so $s > 5\sqrt{v} $, then we determine the poisson-like parameters as follows.

We use the properties of the Poisson distribution $f(n;\lambda) = \exp(n \log\lambda - \lambda + \mathrm{const})$, whose parameter $\lambda$ equals both the expected value of the number of occurrences $n$ and its variance. The function we want is shifted by the background $b$ and scaled by a factor $k$, so we use $f(k(s-b); \lambda)$. This implies that, for the expected value of $s$, $\lambda = n$ and $k(s-b) = k^2 v = n$, from which we obtain $k=\sqrt{n/v}$ and $b = s - n/k$.
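A small numerical sketch of this conversion; the values of s, v and n are illustrative.

import numpy as np

def gaussian_to_poissonlike(s, v, n):
    # convert a 1-D Gaussian fit (rate s, variance v, n counts)
    # to the scale k and offset b of the shifted, scaled Poisson
    k = np.sqrt(n / v)        # from k**2 * v = n
    b = s - n / k             # from k*(s - b) = n
    return k, b

k, b = gaussian_to_poissonlike(s=1.0, v=0.0005, n=1000)
print(f'k={k:.1f}, b={b:.3f}')   # k=1414.2, b=0.293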

2-D Gaussian fit

class Gaussian2dRep[source]

Gaussian2dRep(ll) :: GaussianRep

Manage fits to the loglike object

n = 1000
s = 0.7
b = 0.4
w = np.hstack( [np.full(int(n*s), 1, np.float32), np.full(int(n*b), 0, np.float32)])
cell = dict( n=n, w=w, S=n*s, B=n*b)

ll2 = LogLike(cell)
print( Gaussian2dRep(ll2))
Gaussian2dRep: 1,100 counts
	flux: 1.000 +/- 0.038 
	beta: 0.000 +/- 0.050

class PoissonRep[source]

PoissonRep(loglike, tol=0.2, ts_min=25)

Manage the representation of the log likelihood of a cell by a Poisson.

  • loglike: a LogLike object
  • ts_min : TS threshold that determines whether the fitter is used
  • thresh: sigma threshold to assume no Bayesian restriction

The constructor takes a LogLike object, fits it with the Poisson-like function (see Poisson), and defines a function to evaluate that representation.

Note that beta is set to zero.

If the rate is significant, more than 5 $\sigma$, so that the likelihood is not truncated by the Bayesian requirement, the resulting Poisson-like parameters are straightforward to determine from the 1-D Gaussian fit, exactly as described above for GaussianRep.

Parameter   Type    Default   Details
loglike
tol         float   0.2       note global
ts_min      int     25
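A usage sketch, based on the LogLike object ll from the example above and on the commented development cells below; the attribute poiss and the call pr(rate) are taken from those cells.

pr = PoissonRep(ll)                  # fit the cell; beta is fixed at zero
print(pr.poiss.p)                    # the three poisson-like parameters
x = np.linspace(0.5, 1.5, 101)
plt.plot(x, [pr(r) for r in x]);     # evaluate the representation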
# plt.plot(x,y);

# pr = PoissonRep(fail, tol=0.4);
# plt.plot(x, [pr(t) for t in x]);
# Fragment of the cell log-likelihood evaluation (the def line is added here
# for readability; the name loglike_eval is illustrative):
# def loglike_eval(self, pars):
#     pars = np.atleast_1d(pars)
#     if len(pars)>1:      alpha, beta = pars - np.array([1,0])
#     else:                alpha, beta = max(-1, pars[0]-1), 0
#     tmp =  1 + alpha*self.w + beta*(1-self.w)
#     # clip the log argument to keep it finite (this effectively limits alpha)
#     tmp[tmp<=1e-6]=1e-6
#     return np.sum( np.log(tmp)) - alpha*self.S - beta*self.B
# test(ll, 1), test(ll, [1,0.5])

Poisson representation development

# info = ll.fit_info()
# print(f'LogLike fit info:{info}')
# pr = PoissonRep(ll, ts_min=15)
# print(f'PoissonRep: {pr}, poiss: {pr.poiss}')
# pars = np.array(pr.poiss.p).round(3)
# print(f'poisson pars: {pars}')
# print(f'poisson info:\n{pr.info()} ')

# n, m, sig = np.array(list(info.values())).round(3)
# print(f'Get poisson parameters from n={int(n)}, m={m}, sig={sig}')
# mu, beta = n, n-np.sqrt(n)/sig
# e = m*(mu-beta)
# b = beta/e
# pois = Poisson([m, e , b])
# print(pois, pois.p)
# pr.comparison_plots(xlim=(0.4, 1.6), ylabel='log likelihood' ,
#                     title='Likelihood Functional representations')

class PoissonRepTable[source]

PoissonRepTable(loglike) :: PoissonRep

Create a table, then interpolate it
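The internals are not shown here, but the tabulate-then-interpolate idea can be sketched generically with the LogLike object ll from the example above; the grid limits and spacing are illustrative.

import numpy as np

rate_grid = np.linspace(0, 4, 401)                   # grid of relative rates
table = np.array([ll([r, 0]) for r in rate_grid])    # tabulate the log likelihood once

def loglike_interp(r):
    # fast approximate log likelihood by linear interpolation of the table
    return np.interp(r, rate_grid, table)

print(loglike_interp(1.0), ll([1.0, 0]))             # should agree closely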