# “Baroque possibilities” for constructing SEV in sequential trials

I mentioned in my last post that I had thought up some baroque possibilities for candidate SEV functions in sequential trials. I figure it’s worth writing out exactly what I mean by that. Before I begin, though, I will direct readers to the first 26 slides of this slide deck, especially the plot on slide 6.

As discussed in the slide deck, when you’re designing a two-sided confidence interval procedure you have some freedom to decide, for each value of the true parameter, how much probability mass you will put above (literally above if we’re talking about the plot on slide 6) the upper limit and how much you’ll put below the lower limit. The confidence coverage property only constrains the sum of these two chunks of probability mass.
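In symbols (my notation here, not the slide deck's): at each parameter value $\mu$, an acceptance region $[a(\mu), b(\mu)]$ for the estimator $\hat\theta$ must satisfy

```latex
\Pr_\mu\!\big(\hat\theta > b(\mu)\big) = \alpha_U(\mu), \qquad
\Pr_\mu\!\big(\hat\theta < a(\mu)\big) = \alpha_L(\mu),
```

and coverage $1 - \alpha$ constrains only the sum $\alpha_U(\mu) + \alpha_L(\mu) = \alpha$; how that total is split between the two tails is free to vary with $\mu$.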

The kinds of inferences for which the SEV function was designed are one-sided, directional inferences (upper-bounding inferences and lower-bounding inferences), so there's no arbitrariness to SEV with well-behaved models in fixed-sample-size designs. Sequential designs introduce multiple thresholds at which alpha can be "spent", so even for a simple one-sided test there is already an element of arbitrariness; it must be eliminated by recourse to an alpha-spending function, an expected-sample-size minimization, or some other principle that eats up the degrees of freedom left over after imposing the Type I error constraint. There is likewise arbitrariness in specifying a one-sided confidence procedure for sequential trials: as with two-sided intervals, there are multiple boundaries to specify at each possible parameter value, and the confidence coverage constraint ties up only one degree of freedom.
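Concretely (my notation): for a one-sided sequential test with looks at sample sizes $n_1, \dots, n_K$ and stopping boundaries $c_1, \dots, c_K$, the Type I error constraint is the single equation

```latex
\Pr_{\mu_0}\!\left( \bigcup_{k=1}^{K} \{ Z_{n_k} \ge c_k \} \right) = \alpha ,
```

one equation in $K$ unknowns, leaving $K - 1$ degrees of freedom for an alpha-spending function or some other principle to absorb.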

In the last post I asserted that the conditional procedure was an exact confidence procedure. Here’s the math. Let $q_4(\alpha, \mu)$ and $q_{100}(\alpha, \mu)$ be the quantile functions of the conditional distributions:
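That is, writing $N \in \{4, 100\}$ for the sample size at which the trial stops and $\bar{X}_n$ for the estimate at sample size $n$, these are defined by

```latex
\Pr_\mu\!\big( \bar{X}_n \le q_n(\alpha, \mu) \,\big|\, N = n \big) = \alpha ,
\qquad n \in \{4, 100\}.
```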

Then the confidence coverage of the conditional procedure is
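For the one-sided (lower-bounding) version, conditioning on the stopping sample size and averaging back out gives

```latex
\begin{aligned}
\Pr_\mu\!\big( \bar{X}_N \le q_N(1-\alpha, \mu) \big)
  &= \sum_{n \in \{4, 100\}} \Pr_\mu(N = n) \,
     \Pr_\mu\!\big( \bar{X}_n \le q_n(1-\alpha, \mu) \,\big|\, N = n \big) \\
  &= (1 - \alpha) \sum_{n \in \{4, 100\}} \Pr_\mu(N = n) \;=\; 1 - \alpha ,
\end{aligned}
```

exactly $1 - \alpha$ at every $\mu$, which is the coverage claim.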

The conditional procedure had the difficulty that its inferences could contradict the Type I error rate of the design at the first look. However, we can replace the quantile functions with arbitrary functions as long as they satisfy that same equality for all values of $\mu$, and this will also define an exact confidence procedure. The question then becomes: what principle, set of constraints, or optimization objective can be used to specify a unique choice?
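A quick way to see the coverage claim concretely is by simulation. Here's a minimal sketch (the true $\mu$, the interim z-boundary, and the normal model are illustrative assumptions, not details from the post): estimate the conditional quantiles from one batch of simulated trials, then check unconditional coverage on an independent batch.

```python
import numpy as np

rng = np.random.default_rng(0)

MU = 0.3          # true parameter value (illustrative assumption)
ALPHA = 0.05
N1, N2 = 4, 100   # interim and final sample sizes, as in the post
C = 2.0           # hypothetical early-stopping z-boundary at the interim look

def simulate_trials(n_trials, rng):
    """Return (stopped_early, estimate at stopping) for each simulated trial."""
    x1 = rng.normal(MU, 1, size=(n_trials, N1)).mean(axis=1)
    stop = np.sqrt(N1) * x1 > C  # stop early when the interim z-stat clears C
    # trials that continue accrue N2 - N1 further observations
    x_extra = rng.normal(MU, 1, size=(n_trials, N2 - N1))
    x2 = (x1 * N1 + x_extra.sum(axis=1)) / N2  # pooled mean of all N2 observations
    return stop, np.where(stop, x1, x2)

# batch 1: estimate the conditional (1 - alpha)-quantiles q_4 and q_100
stop, xbar = simulate_trials(200_000, rng)
q4 = np.quantile(xbar[stop], 1 - ALPHA)
q100 = np.quantile(xbar[~stop], 1 - ALPHA)

# batch 2 (independent): conditional coverage in each stopping class is
# 1 - alpha by construction, so the unconditional coverage is too
stop, xbar = simulate_trials(200_000, rng)
covered = np.where(stop, xbar <= q4, xbar <= q100)
print(round(covered.mean(), 3))  # close to 1 - ALPHA
```

The same harness works for any candidate replacement functions: swap $q_4$ and $q_{100}$ for other choices and check whether the weighted sum of conditional coverages still comes out to $1 - \alpha$.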

This sort of procedure offers very fine control over alpha-spending at each parameter value, control that is not available via the orderings on the sample space discussed in the last post that treat values at different sample sizes as directly comparable. But the phrasing of the (S-2) criterion points strongly to that kind of direct comparison of sample space elements, and Figure 4 of the last post shows that this is a non-starter. So, to defend the SEV approach from my critique it will be necessary to: (i) overhaul (S-2) to allow for the sort of fine control available to fully general confidence procedures; (ii) come up with a principle for uniquely identifying one particular procedure as the SEV procedure, ideally a principle in line with severity reasoning as it has been expounded by its proponents up to this point; and (iii) satisfy PRESS (let's not forget that).
