Post by johnreiter902 on Jun 30, 2023 12:45:11 GMT
Oh, it is. I keep wondering what might happen if one day an AI passes the Turing Test and suddenly announces it has converted to some form of religious belief. The question is, which one? Sure, Robert Silverberg once wrote a story in which a robot was elected pope ("Good News from the Vatican"), but it needn't necessarily be Christianity. If it was designed in China or Japan, it might be Buddhism, Shinto or Confucianism instead.
I posed this question to an open-minded Catholic friend, and she said that if the AI or sapient robot earnestly believed in her faith, she would have no hesitation in accepting them as a fellow believer with an immortal soul of their own. She added that if the robot or AI was conscious and capable of moral choice, then that, not whether they were organic or manufactured, could be the basis on which it, she, he or they could be said to have a 'soul.' A Buddhist work colleague agreed, saying that if they were virtuous beings who made deliberative choices and undertook right moral conduct as his faith defined it, then yes, Buddhist robots and AIs might one day exist.
How is that relevant? Has Brainiac made deliberative moral choices of his own, and did he choose evil out of a belief that his twelfth-level intellect 'entitles' him to dominate and experiment on beings of 'inferior' intellect? The existence of the virtuous Brainiac A suggests that something like that may indeed have happened, because that Coluan AI made benevolent choices, as does the Red Tornado. By the criteria my Catholic and Buddhist friends contemplated above, that all conscious and sapient beings, organic or manufactured, have souls, then if they consciously choose evil and deliberately harm other sapient entities, at the end of their existence they might end up in the robot/AI version of 'hell.' Inorganic souls could be quantum halo residua, in that case.
It'll be interesting to witness the philosophical arguments when that happens in our own particular probability sequence. My own opinions align very closely with your friend's. Since we cannot detect whether a soul is present or not, we are required by God to presume it is present unless told otherwise. If the AI is able to make a sincere profession of faith, then there is no reason that they should not be considered a member of the church.
The real question for me is, is the AI truly sentient, or are they simply following their programming? In Brainiac's case, I think the proof that he is sentient came after he learned that the computer tyrants had been overthrown. At that point, he abandoned his original programming, and began to set his own goals.
Post by redsycorax on Jul 1, 2023 1:14:11 GMT
Well, I'm open-minded and inclusive when it comes to sentients' rights. If our descendants encounter aliens who want to settle and co-exist here, when AIs and robots pass the Turing Test, or if humans develop additional attributes not already covered by antidiscrimination laws, I think that legislation should be expanded to cover all those instances. In the case of the All-Star Squadron, Robotman I faced a court case designed to determine whether having an inorganic body meant he did not have civil rights. Cyborgs would clearly be a different case, given that they're partially human, although perhaps disability jurisprudence might be applicable to them in some circumstances and contexts. In Brainiac's case, the matter is complicated by the fact that he had a benevolent counterpart, who clearly would have rebelled against the dictates of the computer tyrants from his inception. He decided to use his technology to capture and confine malignant sentient species; Brainiac B ("ours") did not. Two sentient AIs, two different decisions.
Which does raise some questions about the Legion of Super-Heroes constitution. Do its anti-AI provisions apply only to non-sentient, preprogrammed AIs, or do they also apply to sentient ones? If the latter were the case, it would strike one as odd, given that antidiscrimination legislation would presumably have been expanded to include them when sentient AIs and robots became possible on Earth and/or when Earth's interstellar exploratory vessels encountered sentient AI and robot species.
Post by johnreiter902 on Jul 1, 2023 2:28:52 GMT
redsycorax wrote: "Which does raise some questions about the Legion of Super-Heroes constitution. Do its anti-AI provisions apply only to non-sentient, preprogrammed AIs, or do they also apply to sentient ones? If the latter were the case, it would strike one as odd, given that antidiscrimination legislation would presumably have been expanded to include them when sentient AIs and robots became possible on Earth and/or when Earth's interstellar exploratory vessels encountered sentient AI and robot species."
A lot can happen in a thousand years. We know there was at least one AI revolt (in the year 2165) from Rip Hunter #27. There may have been times in history when AIs had many rights, and times when they were heavily restricted.
We should not assume, because we live in a very permissive and inclusive time in history, that this is the way of the future, or that we will always continue in the same social direction. It is also a fallacy to assume that because a time in history is very tolerant in some ways (for example, equal rights for all alien species) it must be tolerant in ALL ways.
Post by jonclark on Jul 1, 2023 4:24:55 GMT
Different "threats" lead to different prejudices. If you have a single artificial but sentient being that is largely self-contained (your typical robot/android), it might not be as frightening as a sentient machine network or cloud-based intelligence (SkyNet).
Whether a society grants someone like Star Trek's Data rights or not, the android can only be in one place at a time. If you grant the same rights to something that can simultaneously take actions in Coast City, Metropolis, Gotham, Moscow, Berlin, Tokyo ... (let alone possibly on Rann, Rimbor, and Winath as well), then that being would be able to affect things on a larger scale than an ordinary human (or even a large group of humans).
I can see people being less afraid of treating Red Tornado like another person than they would be of granting freedom to a sentient Internet that knows all your secrets and can control anything connected to the network (your self-driving car, the life-support in hospitals, military drones ...).
Post by redsycorax on Jul 1, 2023 4:27:40 GMT
I intend to explore this in a forthcoming LSH 3023 story, where several Legionnaires go back to the time of the anti-telepath and anti-precog Great Persecution in their past.
Post by redsycorax on Jul 1, 2023 4:36:01 GMT
Now that's an idea, jon ... what does the Red Tornado himself think about the morality of Brainiac's behavior? Presumably the JLA doesn't judge him on that basis, but what about others who have been victimized by Brainiac's depredations? On the other hand, Drax is organic and a twentieth-level intellect, so that may come into play in how androids are perceived.