Peering through the dialectical fog
Sorry to respond late, and at an obnoxious length, but I've been busy, and still wanted to share my thoughts.
I agree that utilitarianism has a much more convincing account of what matters, but I think deontic fictionalism and two-level consequentialism concede a lot more to deontology than you recognize. As commenters pointed out, it is plausible that truly committing to deontic fictionalism requires one to essentially become...a deontologist. For example, one might believe that no human beings will actually follow rules they regard as merely instrumental, and so in order to establish useful rules as genuinely normative, we must all affirm that the rules are justified by more than their instrumental value.
I think this is a bit extreme, but it points to a general instability in these sorts of ideas. To me, the key questions facing a deontic fictionalist or two-level consequentialist are: what rules should we adopt, should we allow exceptions, and if so, in which cases? These are the questions that ought to distinguish a two-level utilitarian from a true deontologist, even if the only rules that differ are rules of the form "you should affirm that these rules are normative for non-instrumental reasons".
Either we should basically follow the usual deontological rules but allow for exceptions, or, the same thing framed differently, we should adopt a set of rules that in most cases reduces to the standard deontological rules, but not always. The difference between the rules a utilitarian will arrive at and the usual deontological rules can be thought of in two ways: they are the standard deontological rules, but with a principled way to handle the objections a naive utilitarian would raise when it's clear that the first-order consequences of following a rule are bad; or they are the rules you are led to by considering how a naive utilitarian would account for increasingly higher-order consequences of their behaviour.
But this is a self-referential problem: if you start by considering higher-order consequences of your actions, those higher-order consequences depend on the set of rules you expect people to follow. What you want is a fixed point: a set of rules such that, if everyone acting under those rules considered all the higher-order consequences of their actions, the outcome would be best. This makes it a very hard, plausibly intractable, problem to solve, at least in a satisfactory way. Deontology is a bad solution--it just imposes a set of rules by fiat--but it is at least a stable solution. Utilitarianism, to me, faces the problem of either picking arbitrary cut-offs for how many levels of consequences to follow, or of basically endorsing some set of deontological rules but then allowing unprincipled exceptions whenever the lower-order consequences of following those rules seem bad enough.
This latter point of view is basically my attempt to characterize deontic fictionalism/two-level consequentialism, and I think the difficulty is that, until utilitarians have a truly competing set of rules, a realistic two-level consequentialism is always just going to look like either a set of unprincipled exceptions to deontology, or an endorsement of deontology but for different reasons. In both cases, I think this concedes that deontology is right that a) moral decision-making should be guided mostly by following "common sense" rules of morality and b) deviations from these rules will be mostly based on ad hoc reasoning, and will be difficult-to-impossible to expand into fully general principles.
I think the argument I've laid out above is a long-winded (sorry) way of saying that utilitarianism is a better moral theory [i]in theory[/i] than deontology, but it is hard to translate that into a better account of [i]moral practice[/i]--98% of the time, utilitarianism will tell you "follow deontological rules", but it will give you better reasons. This is at least a little ironic, since utilitarianism, by its nature, ought to be more concerned with differences in moral practice. In your two-level consequentialism post, you note that
[quote]Theories differ in the verdicts they yield about hypothetical cases (and certain kinds of “ex post” retrospective judgments). But it would be a mistake to take these as carrying over straightforwardly to real-life cases—or even to various “ex ante” judgments, including judgments of the quality of the agent’s intentions, character, or decision-making. Utilitarians can say much more commonsensical things about these sorts of judgments than most people realize.[/quote]
But ex post retrospective judgements shouldn't really be that interesting to a utilitarian; subjective evaluations of events after-the-fact presumably make very little difference to actual outcomes for human beings unless they inform ex ante judgements in future cases; and if our ex ante judgements are more commonsensical, then are we really adding much that's new?
In a sense, utilitarianism seems to me something like a scientific theory of, say, animal behaviour that is founded in modern atomic physics and so forth, while deontology is like a theory of animal behaviour founded in, like, "elan vital". The former theory is much better grounded theoretically, but the practical difficulties of applying it might mean that it is not actually a better guide to studying animal behaviour than the latter. "What is the vital force of this frog compelling it to do?" might be a better way to think about how frogs act than "What is the outcome of this completely uncomputable simulation of all the atoms in the frog?", even if the former is basically completely wrong in its view of the world, and the latter is basically completely right.
Now, I've stated the most extreme version of the case; I think I can anticipate some of your objections, and I probably agree with them. First of all, deontologists do actually endorse some pretty bad rules; as you note, a lot of deontologists are not beneficentrists, even though in theory they could be. Maybe compared to a sufficiently good version of deontology, utilitarianism would be little more than a tweak, but without pressure from utilitarians, we end up with pretty crappy versions of deontology.
What's more, I framed everything above in the way least flattering to utilitarianism. In fact, even when a theory is computationally intractable, using correct first principles to answer questions by imposing cut-offs can be a very powerful tool; no one would actually analyze frog behaviour by simulating a frog at the atomic level, but thinking about frogs as made of atoms is not fruitless! Frog behaviour is influenced by biochemistry, and biochemistry reduces to atoms.
So I don't actually endorse the point of view above. But I think it does capture something true about the difficulties of having a practically useful utilitarianism, and about why theories like two-level consequentialism defang the deontologist critique by ceding a lot of ground to them. That's obviously fine, but I think you sometimes write as if, having shown how the two theories are more compatible than one might think, deontologists should consider moving in a more utilitarian direction...when there are not-crazy reasons to argue that your synthesis is actually a bigger step in the direction of deontology!
Person-affecting views:
Having said a lot above, I'll try to be more concise here. I agree there are lots of problems with narrow person-affecting views, but I don't think the only solution is to adopt impersonal reasons and the idea that one can be benefited by being brought into existence--Michael St. Jules has some comments in the Epicurean Fallacy post that I think point at other ways to get around at least some of those difficulties. All attempts to save the spirit of person-affecting views still fail Independence of Irrelevant Alternatives, for example, so I don't mean to say that these solutions are as satisfactory as adopting an impersonal view, much less that they have advantages. It's just that the procreative asymmetry really does feel intuitive to me, so I think it's worth keeping an open mind.
The strongest objection to longtermism is skepticism about the extent of our knowledge. Would the world today be better if people centuries ago had been able and willing to shape our present? I'm inclined to say no; moral and scientific progress has made us, the people of today, better at guiding today's world than Genghis Khan, Queen Elizabeth, or whoever else would have been from their temporal position. Similarly, I suspect people hundreds of years from now will rightly think the same of us. Just as we are better qualified to shape our present than our distant ancestors were, so will our descendants centuries down the road be better qualified to shape their present.
I find Nozick's experience machine intuitively powerful: I don't think I would want to be plugged in. This moves me somewhat away from hedonic utilitarianism towards preference utilitarianism, but preference utilitarianism has some other issues I am uncomfortable with (mainly to do with defining which preferences count). How do you think about the experience machine? Would you plug in, and do you think it counts against hedonic utilitarianism?
An objection that moved me away from utilitarianism is a variant on the demandingness problem. I've often found versions of two-level utilitarianism persuasive in solving certain problems (like Railton's paper on personal relations and alienation). But it seems to me that even here one cannot escape demandingness: given the state of the world, it's not clear that it would be best for us to form many personal relationships that take up time and resources, preventing us from doing good elsewhere. Utilitarians' attempts to square these two priorities often feel squirmy.
It's been my belief that one's meta-ethics matters for how seriously we should take the demandingness problem. And I've always found most meta-ethics associated with utilitarianism too subjectivist to be persuasive on this point. It seems as though you'd need a more firmly objectivist meta-ethics if you're going to be able to justify the kinds of moral demands that a utilitarian outlook recommends.