What objective am I trying to maximize? Who knows, but definitely not a completely utilitarian maximization of the expected utility of humanity.

I’ve heard some utilitarians justify actions that improve their personal utility by claiming that their personal utility affects their ability to maximize global utility. The argument generally goes, “I would be so unhappy doing X that it would negatively affect my ability to improve global utility in other ways, so it’s actually better for me not to do X.” For example: I would be so unhappy in more impactful career X that I would become an ineffective donor, which would actually decrease global utility, so actually I should stay in my less impactful career.

If you allow yourself to make the estimates of how your own personal utility would change (and who’s going to say they know your personal utility better than you?), this can pretty much be used to justify anything. “I have to spend this extravagant amount on remodeling my apartment because otherwise I just wouldn’t feel happy enough to help others!” I know in practice people try to be reasonable about this, but it still seems contrived and roundabout to me to frame your personal preferences in a utilitarian light when they’re really not.

So, I view myself more as trying to maximize the weighted sum $\alpha U_p + (1 - \alpha) U_e$, where $U_p$ is my personal utility and $U_e$ is everyone else’s, and I’m okay with the fact that $\alpha$ is much higher than it would be if I cared about everyone equally. Also note that $U_p$ and $U_e$ are not uncorrelated.

You can just continue making it more and more fine-grained, maybe something like $\alpha_p U_p + \alpha_f U_f + \alpha_{fr} U_{fr} + \alpha_e U_{e}$, where $U_f$ is family utility and $U_{fr}$ is friends’ utility. I think it’s completely fine to have different weights for different people! And it’s more like what people actually do in practice. My different weights on people’s utilities come primarily from my personal closeness to a person. But I don’t, for example, value strangers in my neighborhood more than strangers in another nation.
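For concreteness, the fine-grained weighted sum above can be sketched in a few lines of code. The group names, weights, and utility values here are made-up illustrative numbers, not a claim about anyone's actual preferences:

```python
def weighted_utility(utilities, weights):
    """Combine per-group utilities using normalized weights
    (so the weights need not sum to 1 as written)."""
    assert set(utilities) == set(weights), "each group needs both a weight and a utility"
    total = sum(weights.values())
    return sum(weights[g] / total * utilities[g] for g in utilities)

# Hypothetical weights: alpha_p, alpha_f, alpha_fr, alpha_e from the formula above.
weights = {"self": 0.4, "family": 0.3, "friends": 0.2, "everyone_else": 0.1}
# Hypothetical per-group utilities of some candidate action.
utilities = {"self": 0.9, "family": 0.7, "friends": 0.8, "everyone_else": 0.2}

print(weighted_utility(utilities, weights))  # 0.75
```

Ranking candidate actions by this score is the "maximize the weighted sum" behavior described above; changing the weights (e.g., raising `everyone_else`) shifts which actions win.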

I’m sure this kind of ad-hoc philosophy leads to all sorts of formal philosophical problems, but I’m not sure that artificially construing all your actions to fit a formal theory is much better.

Edit: The paper “Learning a commonsense moral theory” takes a similar view. Its idea of a recursive utility calculus better formalizes what I was trying to get at above.