Discussion in 'Spirituality & Sexuality & Philosophy' started by Heisenberg, Oct 27, 2015.
Fair enough. I'll keep that in mind going forward.
What I do take issue with is bigotry of any form.
What makes a person treat another person like shit because they have a different belief?
The problem I run into when I raise the question (or demonstrate the issue) is that it is always people unaware of their own bigotry who respond. Lol
I have never started a FE thread.
Look back through my posts and pay more attention to the specifics of what I have been saying.
It is these same people that now either ignore me, or tag me in FE posts.
I feel the need to explain using this thread as an example.
My first post was me implying that there is always a different answer that might not have been considered.
It was taken as an attack because it did not follow the rhythm of the premise of this thread.
Even though it was valid, it was disqualified because of personal opinion.
My second post was a poke. I know that there isn't an argument from narcissism.
I think that, in real life, ego and ignorance are where most arguments come from.
@Heisenberg I've recently discovered just how far robotics has developed. I saw a video of a Russian robot doing army drills and construction activities, and there is a video of a robot developed in Saudi Arabia that is strangely human-like, with facial emotions and intelligent cognitive abilities. I don't know if it was programmed to answer the guy's questions for the presentation or not, but it seemed to be thinking for itself.
Here's my argument; please point out any fallacies and feel free to argue against it.
The human consciousness is produced by the input of electrical signals in our brain, therefore a robot that also receives data from electrical signals may develop a consciousness.
Fallacy aside for a moment: never going to happen, and that's just personal opinion based on no research.
But have a read:
Robots/Androids/AI will never have consciousness like a human, because they'll be wired differently.
But to dismiss out of hand the idea that they'll ever have self-awareness, or the ability to contemplate questions without enough data to arrive at firm answers, or the ability to make art, is short-sighted, IMHO.
They'll have a consciousness, it just won't be like ours. We need to start working on ensuring that it has a moral and ethical structure that will respect us, as primitive as we will appear to be to a consciousness that will no doubt be able to think with trillions of calculations per second.
Thanks for the read, Venus; very good article, but it does not dismiss the possibility of developing a more advanced intelligence. What if there is an I, Robot scenario where the manufacturer of robots develops a program specifically designed to police the human race, or worse, exterminate it?
Then we're in serious trouble
*Seriously tho... I don't allow my mind to have such ponderings re: A.I. futuristic capabilities; unnecessary anxiety.
Ignorance is bliss
Fortunately, true AI is a long way off.
It would be much more prudent to stay awake worrying about what men program computers to do. An Air Force Colonel has been leading a project to reduce battlefield rules of engagement to code, since the 1990s. In other words, a program that tells the machine when it's okay to kill humans, based on its program and WITHOUT HUMAN INVOLVEMENT.
There is no problem with the form of the logic, however it is an analogical argument. You are saying that because two things are alike in some way, they must also be alike in other ways. This may or may not be true. So while we cannot fault your argument for being formally fallacious, we still have to judge if it is a weak or strong analogy.
We can argue that it is weak because "consciousness" is still a bit of an ambiguous term. So we would first ask you to offer an operational definition of the term. Next we would want to know the evidence for the claim that consciousness is produced by electrical signals. A radio operates by electrical signals, yet we would not consider it conscious. Obviously there is something more than the mere presence of electrical signals. Is it the number of them? The type? How they interact with each other? How they are processed? Your premise leaves out quite a bit and does nothing to demonstrate that the electrical signals involved in a robot are anything like those utilized by the human brain.
So, you have made a weak analogy, more specifically, a question-begging analogy. You may be correct; however, your argument, as it stands, is not very compelling. We could steel-man it by changing it to say, "The human brain processes electrical signals in such a way as to produce consciousness. So if a robot could also process electrical signals in this way, it would also produce consciousness." But at that point it's basically a tautology. It's like saying, "A human manipulates vocal sounds in such a way as to produce English, so if a dog could manipulate its vocal sounds in the same way, it would also be able to speak English." While the argument is technically true, it doesn't give us much hope that dogs will someday be speaking English.
My dog is very good at telling me what he wants. However, I'm often stupid enough that I fail to understand him.
There are so many logical fallacies that it's confusing to me. I see the illogic, but remembering which fallacy it is takes practice, which I've not done.
Is there a shorter list that covers most bases?
I know it takes lots of practice to get them down pat, and even then it's hard not to stumble into them. So far I've found these to be the most common: argument from ignorance, argument from incredulity, begging the question, and special pleading; and if you want to argue about God, you must be aware of the god of the gaps and the argument from design.
I'm not sure there is a short list, but here's the one that got me started. They've actually added a lengthy introduction to it since I stumbled on it. I think the fallacies are mostly listed in order of prevalence, but that depends a lot on the types of subjects you argue and the people you deal with. For example, if you argue about psychic powers and ghosts, you'll hear an abundance of special pleading. If you argue about alternative medicine and healthy eating, you'll hear tons of appeal to nature fallacies.
Seeing the mistake is the most important part; knowing the name of the fallacy can be a trivial detail. But other times, knowing the name may help you remember the structure of the error, which then helps you better deconstruct an argument that seems wrong but that you can't quite put your finger on.
Name this fallacy:
My uncle tells me I shouldn't be afraid to ride the wooden roller coaster at Six Flags. He says that in 30 years no one has ever been injured and the ride has never had a major malfunction. To me, that just means the coaster is overdue for a tragedy. Something bad is probably going to happen any day now.
Mechanics at Six Flags Discovery Kingdom have been locked out since May 2, preventing unionized workers from servicing the park’s rides.
I believe it's ignorance, but for a different reason than grandpapy. The boy is taking general data and making it out to mean something specific. He is misconstruing the statistics: the odds, which show that the ride has been perfectly safe for 30 years, are taken to mean that the odds are now in favor of an accident. It would be like me, having never gotten a royal flush in poker, going all in with J-10 suited believing that I'm due for a royal flush, when the truth is the odds are still about 650,000:1.
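As a side note, the royal-flush odds quoted above are easy to verify with a few lines of Python (just a quick sketch of the combinatorics; the exact figure is 649,739 to 1 against, which rounds to the 650,000:1 in the post), and the same arithmetic shows why past deals don't change them:

```python
from math import comb

# Total distinct five-card poker hands: C(52, 5)
total_hands = comb(52, 5)          # 2,598,960
royal_flushes = 4                  # one royal flush per suit

# Odds against drawing a royal flush on any single deal
odds_against = (total_hands - royal_flushes) // royal_flushes
print(f"Odds against a royal flush: {odds_against:,} to 1")  # 649,739 to 1

# Each fresh deal is independent: the probability is identical
# no matter how many hands you have already played without one.
p = royal_flushes / total_hands
print(f"Probability per deal: {p:.8f}")
```

Since each shuffle is independent, the per-deal probability never budges; the "I'm due" reasoning the post describes is commonly known as the gambler's fallacy.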
My remark has fallacies based in bias, opinion, experience, and fear.
So take that into account.