The Kerr likelihood formula
For each cell with a set of photons with weights $w$, the log likelihood as a function of $\alpha$ and $\beta$ is
$$ \displaystyle\log\mathcal{L}(\alpha,\beta\ |\ w)\ = \sum_{w} \log \big( 1 + \alpha\ w + \beta\ (1-w) \big) - (\alpha\ S + \beta\ B) $$
where $\alpha$ and $\beta$ are the excess signal and background fractions and $S$ and $B$ are the expected numbers of signal and background counts for the cell, determined from the full data set and relative exposure for the cell.
Usually we assume $\beta=0$, that is, that the background is not varying, and look for signal variation.
In the special case where all $w$ values are 1, this reduces to the Poisson likelihood $n\log(1+\alpha) - \alpha S$; setting its derivative $n/(1+\alpha) - S$ to zero gives the solution $\alpha = n/S - 1$, where $n$ is the number of photons.
This module uses this functional evaluation of the likelihood for a cell to define a Poisson-like three-parameter function that approximates it.
The case of a known variable background is not equivalent to allowing $\beta$ to vary, since the rest of the background would not be varying; still, detecting a varying $\beta$ would be a reason for concern. This case could be dealt with by allowing for two sources.
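As a concrete illustration, here is a minimal sketch of this likelihood as a plain function; the name kerr_loglike and the bare-NumPy interface are assumptions for illustration, not the module's actual API.

import numpy as np

def kerr_loglike(alpha, beta, w, S, B):
    # Kerr log likelihood for one cell:
    #   sum over photons of log(1 + alpha*w + beta*(1-w)), minus (alpha*S + beta*B)
    # w: array of photon weights; S, B: expected signal and background counts
    return np.sum(np.log(1 + alpha*w + beta*(1 - w))) - (alpha*S + beta*B)

# At alpha = beta = 0 the log likelihood is zero by construction.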
Special case: expand the log
The case where the variation of the signal is small and the cell is background-dominated deserves attention. Suppose $\max(|\alpha w|) \ll 1$ and $\beta$ is also small.
Expanding $\log(1+\alpha w) \approx \alpha w - (\alpha w)^2/2$ makes the log likelihood quadratic in $\alpha$, so we only need the sums $\sum w$ and $\sum w^2$. Denoting them as $W$ and $U$, the solution is
$$ \alpha = (W-S)/U \pm 1/\sqrt{U} $$
However, for a source that is mostly quiescent, with brief flares, we have $\alpha = -1 + \epsilon$, with $0 < \epsilon \ll 1$.
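The quadratic solution is easy to state in code. The helper below is an illustrative sketch, not part of the module, and it applies only in the small-$|\alpha w|$ regime above, not the flare case just described.

import numpy as np

def quadratic_alpha(w, S):
    # Gaussian approximation: expand log(1 + alpha*w) to second order in alpha.
    W = w.sum()         # sum of weights
    U = (w**2).sum()    # sum of squared weights
    return (W - S)/U, 1/np.sqrt(U)   # estimate and its 1-sigma uncertainty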
# def fitit(loglike, pars):
#     pars = np.atleast_1d(pars)
#     fun  = lambda p: -loglike(p)                          # minimize the negative log likelihood
#     grad = lambda p: -np.atleast_1d(loglike.gradient(p))  # gradient negated to match fun
#     print(f'Estimate: {pars.round(4)}, function: {fun(pars):.3f}, gradient: {(np.array(grad(pars))).round(4)}')
#     fi = optimize.fmin_l_bfgs_b(fun, pars, fprime=grad)
#     fit_pars = np.array(fi[0])
#     print(f'Fit: {fit_pars.round(4)}, function {fi[1]:.3f}, gradient: {fi[2]["grad"].round(4)}')
#     return fit_pars
# loglike = ll
# fun  = lambda p: -loglike(p)
# grad = lambda p: -np.atleast_1d(loglike.gradient(p))
# def check(pars):
#     pars = np.atleast_1d(pars)
#     print(f'pars: {pars.round(4)}, function: {fun(pars):.3f}, gradient: {(np.array(grad(pars))).round(4)}')
# check([1, 0]); check([1.01, 0]); check([0.99, 0])
# fitit(ll, [1.,0])
# fitit(ll, [0.99, 0])
# pars=[1,0.1]
# ll.gradient(pars)
If the rate is significant, more than $5\sigma$, the resulting Poisson-like parameters are straightforward to fit.
If $s$ and $v$ are the signal rate and its variance, determined by the 1-d Gaussian fit to $n$ measurements, with $s > 5\sqrt{v}$, then we determine the Poisson-like parameters as follows.
We use the properties of the Poisson distribution $f(n;\lambda) = \exp(n \log\lambda - \lambda + \mathrm{const})$: the parameter $\lambda$ is equal to both the expected number of occurrences $n$ and its variance. The function we want is shifted by the background $b$ and scaled by a factor $k$, so we use $f(k(s-b);\ \lambda)$. This implies that at the expected value of $s$, $\lambda = n$ and $k(s-b) = k^2 v = n$,
from which we obtain $k=\sqrt{n/v}$ and $b = s - n/k$.
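In code, the conversion might look like this sketch; the function name is illustrative, while the module's own representation classes appear in the example below.

import numpy as np

def poisson_pars_from_gaussian(n, s, v):
    # Solve k**2 * v = n and k*(s - b) = n for the scale k and background offset b.
    k = np.sqrt(n/v)    # from k**2 * v = n
    b = s - n/k         # from k*(s - b) = n; equivalently s - k*v
    return k, b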
n = 1000
s = 0.7    # signal rate
b = 0.4    # background rate
# Synthetic cell: 700 photons with weight 1 (pure signal), 400 with weight 0 (pure background)
w = np.hstack([np.full(int(n*s), 1, np.float32), np.full(int(n*b), 0, np.float32)])
cell = dict(n=n, w=w, S=n*s, B=n*b)   # expected counts S and B match the simulation
ll2 = LogLike(cell)                   # this module's cell log-likelihood class
print(Gaussian2dRep(ll2))             # 2-d Gaussian representation of the likelihood
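Since the simulated weights match the expectations $S=700$ and $B=400$ by construction, the fit should find $\alpha \approx 0$; with $U = \sum w^2 = 700$ here, the quadratic approximation above suggests an uncertainty of roughly $1/\sqrt{700} \approx 0.04$.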
# plt.plot(x,y);
# pr = PoissonRep(fail, tol=0.4);
# plt.plot(x, [pr(t) for t in x]);
# pars = np.atleast_1d(pars)
# # parameters are offset by 1: pars = (1 + alpha, beta)
# if len(pars) > 1: alpha, beta = pars - np.array([1, 0])
# else: alpha, beta = max(-1, pars[0]-1), 0
# tmp = 1 + alpha*self.w + beta*(1-self.w)
# # clip the argument of the log to avoid log(0)
# tmp[tmp <= 1e-6] = 1e-6
# return np.sum(np.log(tmp)) - alpha*self.S - beta*self.B
# test(ll, 1); test(ll, [1, 0.5])
# info = ll.fit_info()
# print(f'LogLike fit info:{info}')
# pr = PoissonRep(ll, ts_min=15)
# print(f'PoissonRep: {pr}, poiss: {pr.poiss}')
# pars = np.array(pr.poiss.p).round(3)
# print(f'poisson pars: {pars}')
# print(f'poisson info:\n{pr.info()} ')
# n, m, sig = np.array(list(info.values())).round(3)
# print(f'Get poisson parameters from n={int(n)}, m={m}, sig={sig}')
# mu, beta = n, n-np.sqrt(n)/sig
# e = m*(mu-beta)
# b = beta/e
# pois = Poisson([m, e, b])
# print(pois, pois.p)
# pr.comparison_plots(xlim=(0.4, 1.6), ylabel='log likelihood',
#                     title='Likelihood Functional representations')