I had tautology once, but a single shot of penicillin cleared it all up!
There's really just nothing I can say to defuse this, is there?
A measurement needs a metric. What is it, and how can we know that it is THE metric in use? If I say "I can know whether something is a meter long, or longer, or shorter," ultimately the metric is the standard meter, an actual physical object kept in France. The valuation of options or actions seems not subject to an objective metric. If it has no metric (one that can be used by people other than yourself), your theory will remain irrelevant.
You also need to work on explaining it more plainly, but I suspect its tautological aspects would simply be brought into view more plainly if you did so. For most of us, I gather, it seems like so much "sleight of word."
Also, ethics and morals are philosophy. Psychology is a soft science. The best psychology can do is to describe the process people use to arrive at ethical choices, but that is a FAR way from determining what's right and what's wrong.
I was taught that the reason ethical arguments never seem settled is that ethical disputes are about attitudes, not facts. Until facts can be agreed upon and a consistent way of talking about them is devised, there will never be an absolute ethic and it will all be about irresolvable disputes.
Unseen, if choice is the measurement of values carried out for the sake of accuracy, then humans value accuracy above all else. If so, then at the core of measurement, accuracy can be nothing other than its purpose.
People don't tap into the reality that humans value everything for the sake of accuracy, and that is a very major idea. If that is the case, choice is math and error is mis-measurement. Nothing else.
The simple explanation is this alternative to the notion of free-will:
Humans have a need to act as finite beings, but an uncertainty as to the proper course of action. Since acting is necessary for a finite being, the means of selecting actions has evolved so that everything is assigned a value, and those values are then compared/weighed to determine the appropriate course of action. Values are obtained through the perceptions. The perceptions can be inaccurate, as can the weighing: values can be mis-measured. They are also subject to constant re-valuation as new data is obtained. Interference with the delicate chemical balance on which the mind operates can also cause the measuring process to go awry.
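The weighing process described above can be sketched as a toy model. Everything here (the option values, the noise levels, the counts) is a hypothetical illustration, not an established result: choice is modeled as picking the option with the highest *perceived* value, where perception adds measurement error, so "bad choices" are literally mis-measurements.

```python
import random

def choose(true_values, noise, rng):
    """Perceive each option's value with some measurement error,
    then pick the option whose *perceived* value is highest."""
    perceived = [v + rng.gauss(0, noise) for v in true_values]
    return max(range(len(true_values)), key=lambda i: perceived[i])

# Option 1 is objectively best.  With accurate perception it is chosen
# every time; with very noisy perception, mis-rankings become common.
rng = random.Random(0)
true_values = [1.0, 2.0, 1.5]
accurate = sum(choose(true_values, 0.01, rng) == 1 for _ in range(1000))
noisy    = sum(choose(true_values, 5.0,  rng) == 1 for _ in range(1000))
```

Nothing in the model changes the options themselves; only the reliability of the measurement changes, which is the point being argued.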
Society benefits from metrics. However, I don't see us as evolved enough to have a standard one. All the reasons I just gave for what goes awry show that the perceptions are unreliable and the mind is prone to miscalculation. It is pretty easy to call bad choices bad math.
But you are right, and this is really essential. If we are all measuring, we need a metric. But we don't and can't have one. My proposal is that we need to do the equivalent of increasing the sample size in an experiment.
We need to get rid of scorn, shaming, and all the nonsense that encourages people to cling to unnecessary levels of self-sufficiency. I am not saying eliminate self-sufficiency altogether. People simply need to feel okay with being wrong, not shamed for it. That would let humans dismiss inaccurate ideas more easily, because such ideas are clung to out of fear of shame (something the behavioral sciences have established as a well-tested fact). Trusting a single vantage point and a single set of perceptions leads to poor accuracy.
Most of this is simply putting a number of already understood facts together in a way that should have happened a long time ago. Hell, I was shocked to find out how much of this the behavioral sciences already knew yet hadn't put together, because finding this out was what led to my deconversion before I began my studies in that field. I found it absurd for God to punish people for being bad at math. I found out that Albert Ellis and others were already paving the way for this to become common knowledge some day, but never unified it all.
So, what you are proposing is a social change as well as an analysis of decision making: a world without morality or ethics (both of which would have to go because of the scorn and shame thing).
At any rate, your system is still a big tautology and it is based upon assumptions most of us won't make.
What do you see as assumptions?
Could that be due to potentially not knowing what is substantiated already in psychology?
What is the exact nature of the tautology here? And what are the assumptions? Without that I can't know what you are even objecting to.
Oh, and right and wrong can still be there. Not good and bad, but right and wrong for sure. They are tied to having more or less accurate values. Once we tap into the idea that accuracy is the underlying purpose of values, and thus of choice (which we all know is based on weighing values), we can see that it is appropriate to value accuracy, since it is the end of all means relating to choice.
Now we come full circle, and you understand why I say there is a difference between right and good. And now you understand what I was saying earlier about shame and scorn.
You don't have to believe my hypothesis, but at least you understand what I meant when we started this discussion. That is really what I was going for.
Unseen, if choice is measurement, then humans value accuracy above all else. If so, then at the core of measurement, accuracy can be nothing other than its purpose.
This is an assumption and it's one of those assumptions most of us won't agree to. So, your "if" remains an if and your theory is a huge contingency.
Which part is the assumption? If it is "humans value accuracy above all else," that part was just poorly worded.
The hypothesis wouldn't rest on what we value as that varies from person to person.
Cognitive behaviorism has substantiated that values are determined through the perceptions. It is also understood that values are drawn from the identifiable properties of things... I just don't understand what you are objecting to as an assumption beyond things that wouldn't make or break the hypothesis.
"If I cooperated last time, you will cooperate this time with probability p.
If I defected last time, you will cooperate this time with probability q. ...
What remained was Generous Tit-for-tat, a more cooperative strategy that incorporated an idea of forgiveness. "In this strategy if you cooperated last time then I will definitely cooperate this time, so p=1. And if you defected last time I will still cooperate with a certain probability. So I will always cooperate if you cooperate, and I sometimes cooperate even if you defect, and that is forgiveness." This probability of forgiveness was just the probability q in each strategy. The most successful strategy to emerge from the tournament had p=1 and q=1/3. ...
This was a dramatic difference from Axelrod's old tournaments, where Tit-for-tat reigned supreme. Even more surprising was that the system carried on evolving, the society becoming more and more cooperative, more and more lenient, until it was dominated by players who always cooperated. "And once you have a society of Always Cooperate it invites the invasion of Always Defect," says Nowak. All it takes is a few mutations in the strategies as they reproduce and the whole cycle will start again. ...
"It is very beautiful because you have these cycles of cooperation and defection." Nowak's first observations in the field have since been confirmed by many other studies over the years: cooperation is never fully stable. "So we have a simple mathematical version of oscillations in human history, where you have cooperation for some time, then it is destroyed, then it is rebuilt, and so on." "
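The reactive strategies quoted above (cooperate with probability p after the opponent cooperates, probability q after they defect) are easy to simulate. This is a hedged sketch, not the tournament code: payoffs are omitted, and a small error rate (an assumed 1% per move) stands in for the mutations that drive the cycles. It shows why Generous Tit-for-tat (p=1, q=1/3) recovers from mistakes while strict Tit-for-tat (p=1, q=0) gets locked into retaliation.

```python
import random

def mutual_cooperation_rate(p, q, rounds=10000, noise=0.01, seed=1):
    """Self-play of a reactive strategy in the repeated Prisoner's Dilemma.
    Each player cooperates with probability p if the opponent cooperated
    last round, and with probability q if the opponent defected; `noise`
    flips each intended move 1% of the time (implementation errors)."""
    rng = random.Random(seed)
    a = b = True                      # both start by cooperating
    both = 0
    for _ in range(rounds):
        next_a = rng.random() < (p if b else q)
        next_b = rng.random() < (p if a else q)
        if rng.random() < noise:
            next_a = not next_a
        if rng.random() < noise:
            next_b = not next_b
        a, b = next_a, next_b
        both += a and b
    return both / rounds

tft  = mutual_cooperation_rate(1.0, 0.0)    # strict Tit-for-tat
gtft = mutual_cooperation_rate(1.0, 1/3)    # Generous Tit-for-tat
```

A single mistaken defection sends two strict Tit-for-tat players into endless alternating retaliation (or mutual defection), while Generous Tit-for-tat's 1/3 chance of forgiveness pulls play back to mutual cooperation, so its long-run cooperation rate stays far higher.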
Indirect reciprocity and reputation
"... our assessment of a player's reputation is a number, r, which we set to zero until we observe them playing the game. ...
A more complex social norm might have a player's reputation increase (or decrease) only if we see them help (or not help) players with reputations larger than a certain value. ...
... my strategy might be to only help those recipients whose reputation I judge to have at least some value, k (so I would help someone if their reputation r≥k). ...
... the most successful, in terms of how long they remained dominant in the population, were those strategies that behaved cooperatively and discriminated on the basis of their opponents' reputation, that is, those with k≤0."
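The reputation mechanism quoted above can be sketched in a few lines. This is a toy version under assumed rules, not the study's actual model: reputations start at r=0, a donor with threshold k helps any recipient with r ≥ k, helping raises the donor's reputation, and (a "standing"-style assumption I am adding) refusing is only penalized when the recipient's reputation was non-negative. Two discriminators with k=0 end up helping each other, while an unconditional defector's reputation sinks and help to him dries up.

```python
import random

def simulate(thresholds, rounds=300, seed=0):
    """Random donor/recipient pairs play the helping game.
    thresholds[i] is player i's k: help iff recipient's reputation r >= k.
    Helping: donor's reputation +1.  Refusing a recipient in good
    standing (r >= 0): donor's reputation -1.  Refusing a bad-standing
    recipient costs nothing (the assumed 'standing' norm)."""
    rng = random.Random(seed)
    n = len(thresholds)
    rep = [0] * n
    helped = [0] * n                  # how often each player received help
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)
        if rep[recipient] >= thresholds[donor]:
            rep[donor] += 1
            helped[recipient] += 1
        elif rep[recipient] >= 0:
            rep[donor] -= 1
    return rep, helped

# Two discriminators (k=0) and one player who never helps (huge k).
rep, helped = simulate([0, 0, 10**9])
```

The defector is helped at most a couple of times early on, while his reputation falls with every refusal; the discriminators, who help anyone not in bad standing, keep positive reputations and keep receiving help, matching the quoted finding that cooperative, reputation-discriminating strategies (k ≤ 0) dominate.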
This is all a bit difficult and fruitless until I unveil the central idea. I'm hoping it will all make sense then. This will take a few months, after I've filled in the rest of the pieces. What I'm working on at the moment is "reasons to be good". I believe that we all know from experience that it's better to be "good". We just need to analyze the reasons why this is so. We can describe the neurological basis of empathy, for example, but this doesn't justify empathy, it just says that evolution says we should be empathic. However, it does somehow back up the argument. It says that empathy is there for a reason.
I heard this exchange in an ethics course I took:
Prof: You have a question?
Student: Yes, WHY should we do what's good?
Prof: Because there's nothing better to do.
WHERE does evolution say we should be empathic? Evolution progresses based on death!
RE: "Evolution progresses based on death!"
Evolution progresses by surviving death long enough to pass on your survival traits to your progeny.