This thought was inspired by a lecture given by Sam Harris and the discussion that followed between him and Richard Dawkins.
Do you think science can say anything about what morality is?
I think it actually can, and that, on the contrary, religion can't. I hear the opposite argument very often from religious people. I think religious "morality" is deeply immoral precisely because it is absolute. Morality has to be relative; it has to be discussed and reasoned about.
Furthermore, I act morally because I want to. A religious person acts "morally" because he or she wants to be rewarded for it in the afterlife. I think this is very selfish and thus immoral.
You have to be moral because you want to be, not because you're told to be, because that in itself is immoral.
To act morally means to be ourselves, to be human, and to take responsibility for our actions.
Now, the latter point can be disputed, since science has shown that there might not actually be free will, so it is questionable whether we can be held responsible for our actions at all.
What do you think? What can Religion say about morality?
Also, why is everyone freaked out about trying to make a scientific moral code? It's like when people talk about AI: just because it always kills you in the movies doesn't mean it HAS to be evil. Since the morality of any religion is most likely just the best ideas about how to treat people, written down by some idiots thousands of years ago, I don't see how science could do ANY worse.
Science (and AI) are still just tools to be used by us, or abused. Just as there's no inherent evil in science or AI, there's no inherent goodness in it, either. I don't know what a "scientific moral code" could be, sans fallible human design or judgement. It's scary to me to think that anyone would claim one particular, absolute set of mathematical or scientific rules or principles could be used as an infallible or perfect tool to prescribe moral behavior. Describe it yes, but prescribe it, no.
Sure, science increasingly informs our decisions/judgements. But take a real, current example of "moral" debate these days, like abortion. In my opinion, science already helps to inform us about characteristics of a fetus and its awareness of pain, or consciousness, and we'll learn more and more. But it's still ultimately up to a human to decide what weight to put on each cold, hard piece of "data" that's attached to a potentially living human being, and I just don't see how (e.g.) AI as designed by other humans could be trusted to make such a decision for us.
I just don't see how (e.g.) AI as designed by other humans could be trusted to make such a decision for us.
A recent experiment showed that artificial evolution mimics biological evolution, so artificial entities could evolve morality just as we did. And with enough research, I believe we can develop AIs that don't have the human failings of jealousy, prejudice, hate, etc. Such AIs could make impartial decisions for the good of everyone.
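For what it's worth, the idea that cooperative, "moral"-looking behaviour can emerge from selection alone is easy to demonstrate in a toy simulation. The sketch below is my own illustration, not the experiment referred to above: strategies for the repeated prisoner's dilemma compete in a round-robin tournament, and the reciprocating "tit for tat" strategy displaces unconditional defectors within a few generations.

```python
# Toy sketch of selection favouring reciprocity (an illustration only,
# not the experiment mentioned above). Strategies play the repeated
# prisoner's dilemma; the top-scoring half survives and reproduces.

PAYOFF = {  # (my move, their move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=20):
    """Total payoff each strategy earns over `rounds` repeated games."""
    score_a = score_b = 0
    last_a = last_b = "C"              # everyone starts by cooperating
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

def always_defect(_):
    return "D"

def tit_for_tat(opponents_last_move):
    return opponents_last_move         # mirror whatever they did last

def evolve(population, generations=5):
    """Round-robin tournament each generation; the better-scoring half
    survives and duplicates itself (simple truncation selection)."""
    for _ in range(generations):
        totals = [
            sum(play(population[i], population[j])[0]
                for j in range(len(population)) if j != i)
            for i in range(len(population))
        ]
        ranked = sorted(range(len(population)),
                        key=lambda i: totals[i], reverse=True)
        survivors = [population[i] for i in ranked[: len(population) // 2]]
        population = survivors + survivors
    return population

if __name__ == "__main__":
    start = [always_defect] * 10 + [tit_for_tat] * 10
    final = evolve(start)
    share = sum(s is tit_for_tat for s in final) / len(final)
    print(f"tit-for-tat share after evolution: {share:.0%}")  # -> 100%
```

Of course, this only shows that reciprocity can out-compete pure defection under one particular payoff matrix; it says nothing about whether such evolved behaviour deserves to be called morality.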
In his Void Trilogy, author Peter F. Hamilton has an AI named ANA; really, it is a technological singularity. Its personality was the sum of the personalities of all the people uploaded into it. By "uploading" I mean that in the books, humans could leave their bodies and live in a software world (and they could go back into a body, or "meat bag" as they called it). So whenever someone was uploaded into ANA, their personality became part of the overall ANA personality.
I don't think we can have ANA anytime soon (though it would be cool if we could), but we can build systems that help us a lot.
with enough research I believe that we can develop AIs that don't have the human failings of jealousy, prejudice, hate etc. And such AIs could make impartial decisions that are for the better of everyone.
But surely that's the point. Those human failings are a part of being human, and must be taken into account when humans make moral decisions. Why do we need computers or machines to do the job? Judge Judy not good enough for you?
Emotions don't usually help the cause of rationality, and biases (which you didn't mention) are directly opposed to it. And we humans can't really help letting our emotions and biases get in the way.
There is a small town somewhere in Texas where all the patent trolls file their cases. Why? Because the jury pool there is biased and known for siding with the patent holder.
Over here in India, cheating in exams is very common. Most of the time, when a kid is caught, nothing happens. Nothing! And it's not just that someone is suspected of peeking at another's paper; I've seen scores of kids allowed to continue their examination after cheat sheets were found on them. At most, they are stopped from continuing if it is late in the exam, or given a fresh answer sheet if it is early. I saw one guy who had photocopied several pages from the textbook at a very small size, making a sort of cheating booklet; when he was caught, the only action taken was not letting him complete his paper. The reason so many of these cheaters get away with it is that the invigilators don't want to wreck the kids' careers. The human factor comes into play.
So yeah, I think it would be better if we had machines to aid our decision making, seeing things we couldn't or wouldn't see and looking at things from various perspectives in an unbiased way.
Akshay - greetings to India.
A recent experiment showed that artificial evolution mimics biological evolution. So artificial entities can evolve morality, just like we did.
It would be interesting to see what moral values the artificial entities came up with. The results would, I'm sure, be very illuminating. We might be forced to examine our existing moral values in a new, fundamental way - and that must be a good idea. Still, ultimately - the selection and value-judgements of particular moral qualities will always have to be made by humans.
Texas ... India
Those are two interesting cases. Obviously, there is a blatant moral failure in both. What would be a solution? We have two possible choices of adjudicator: humans or, as you say, machines. Humans are sometimes corruptible. Machines are never corruptible, if they are programmed and operated correctly. Also, as you point out, a machine could cope with many different scenarios, inputs, rules etc. - like some kind of moral spreadsheet(!). So yes, I can see a case for using a machine to make moral decisions when no trustworthy human beings are available, or to carry out a theoretical investigation. However, that still leaves the question of how to originate the moral rules in the first place. I contend that a machine cannot do this - it is a purely human problem, not a scientific or mathematical one. I believe it is not too hard: just a matter of observing real life, moral situations and moral people, and picking out what works. Further, I contend that we can select a small, axiomatic set of morals which everyone [or at least, ALL "good", "decent" people] believes is good and always valid - an objective moral foundation.
Why do we even need an objective morality? Christians talk about it a lot. Atheists don't seem at all interested. The atheists are right - 99% of the time we don't need one. But the Christians are more right - we need it for the sake of moral solidity and philosophical soundness. Without one, we really are on shifting sands. There's no getting away from that. Without objective moral values, how do we properly prove that our actions are morally valid? I really would like an answer to that. I've never found one. I just don't believe it's enough to say, "because I think I'm right" or "because this group says so" etc. It has to be based on satisfactory foundations, just like every other area of knowledge.
There are rare situations where moral objectivity is absolutely vital for daily life to carry on. When I was younger, I was in that situation. Suddenly, I was unable to trust any of my own moral decisions and was forced to start again completely from scratch. I didn't trust anyone else's either - they were no more reliable than my own. The only solution was to find some abstract moral values which I could be satisfied were self-evidently true and which were easy to apply. I settled on "absolutely anything is allowed unless it harms others". Being an anarchist, I found this satisfying. I didn't invent it - it was around already. Anyway, just that one rule was enough to let me get by until I could find a few more - which, of course, are necessary to do a proper job.
Other people for whom a set of objective moral standards is essential could include newly-deconverted Christians. They have suddenly lost their whole old moral framework and need to find another which they can truly rely on. An individual, or a group, is always fallible. The only solution is something universal - which, by definition, no decent, reasonable human being will disagree with. Hence, by a process of elimination, and from a purely human perspective, we cannot make an error in our particular choices of fundamental moral values if we take this approach. We can rely on them with confidence. Hooray! Just what we were looking for.
Akshay - what do you say to that?
Machines are never corruptible, if they are programmed and operated correctly.
I agree with this statement, but I'd add that the "if" in it is a really, really big if. What worries me is the belief some people have that there can be a perfect version of morality, and that if there is, some "objective machine" should be able to determine it.
Firstly, how can one ever know that a machine's programming is perfect and will remain so forever? Secondly, how can one know that a machine's owner has no hidden, selfish motives?
Take, for example, a scenario where several civilizations living on different planets decide to trust one so-called perfect artificial intelligence built by one of them. How could we ever really know that it is perfect, and that we aren't simply forfeiting our freedom to it?
When someone keeps insisting that reliance on some perfect, unquestionable version of reality will someday be possible, I just don't yet see how this perfection could ever be guaranteed, or which of us should be entrusted to vet it and grant it absolute power. It sounds to me like just another potent version of dogmatic religion.
Btw, a new TA discussion "Atheists cannot be moral" brought me back to this discussion.
What I mean is, the list of basic axiomatic values has to be as small and restricted as we can get away with, to ensure its universality. After all, as you may properly point out, humans disagree on just about everything.
Then, if we choose a value from this list, we can be confident that the value we have chosen will never be wrong. [sorry for 55 edits.]
Perhaps science can tell us something about why we act in a moral way.
It can; there are many books on the evolution of morality, starting with Dawkins' The Selfish Gene.
Gustaf, do you have a link to the original lecture and discussion? I am very interested in this subject.
Your best bet is to get a copy of Sam Harris's book "The Moral Landscape"; the talk is about it.
This topic has been the subject of some decent scientific study.
For a good summary, pick up Michael Shermer's "The Science of Good and Evil". Also, go to a science news aggregation site like ScienceDaily and search for "morality", like this: http://www.sciencedaily.com/search/?keyword=morality
Here's a related search that you may find turns up interesting tidbits: http://www.sciencedaily.com/search/?keyword=liberal+conservative+gene
In general, since we are social animals, behaviors consonant with cooperation and fairness may have been selected for during our long evolution.