The July/August issue of Discover magazine carried an extensive four-article look at Artificial Intelligence (AI) in medicine. By the way, I still object to the misnomer "artificial intelligence," because the word "artificial" connotes something inert and unresponsive, which digital intelligence (DI) certainly is not.
The first article makes it clear that DI will be beneficial in detecting disease earlier than any human being could possibly do. That much is already happening. DI far exceeds the human eye in scanning and imaging technologies for early detection of cancers and heart disease. This obvious advantage is exponentially compounded by DI’s ability to instantly compare a scan to vast databases of similar scans. Furthermore, in the near future, nanobot technology will produce targeted internal scans and images that are almost unimaginable today.
The second article reveals how DI is starting to overcome the shortage of mental health and medical consultations, especially in rural areas. The technology with which you may already be familiar is chatbots. You have probably seen former Olympic swimmer Michael Phelps in TV commercials having a heartfelt conversation online about his anxieties and insecurities. He may be getting good, helpful, and reassuring advice from the other end, but he is not talking to a human.
In the third article, DI's ability to make ethical decisions is explored. Of course, DI will be no better at making ethical decisions than we are. After all, DI, now and in the future, will be directly responding to the ethical algorithms fed into it by us. In the future, DI will have to make the decision to send the train down track A or track B. Sending the train down track A will kill thirteen octogenarians. Sending the train down track B will kill the great-grandson of Albert Einstein. The only real advantage DI has in making an ethical decision such as this is the speed at which it will be able to make it. Either way, some will see those decisions as wrong and others as right. Garbage in will always produce garbage out. The programmers and the programmers' bosses will still be liable, not the DI that executes the command.
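The point that DI merely executes whatever ethics its programmers encode can be made concrete with a small, purely hypothetical sketch. The function name, the weighting scheme, and the numbers below are illustrative assumptions, not any real system:

```python
# Hypothetical sketch: a "trolley" decision is only as ethical as the
# weight function the programmers chose. All names and values are invented.

def choose_track(casualties_a, casualties_b, weight_fn):
    """Return 'A' or 'B' by comparing programmer-supplied harm scores."""
    harm_a = sum(weight_fn(person) for person in casualties_a)
    harm_b = sum(weight_fn(person) for person in casualties_b)
    return "A" if harm_a < harm_b else "B"

# One programmer might weight every life equally...
equal_weight = lambda person: 1

track_a = ["octogenarian"] * 13          # thirteen lives on track A
track_b = ["Einstein's great-grandson"]  # one life on track B

print(choose_track(track_a, track_b, equal_weight))  # -> B
```

Swap in a different `weight_fn` and the machine "decides" differently, at machine speed. The responsibility for the weights stays with their authors, which is exactly the article's point.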
The fourth article expands on the third article by pointing out that any gender or social bias fed into DI will be replicated unless detected and weeded out.
Weekly Science Bomb 4!
Write On!
You said, “…we cannot ‘give’ them feelings. That much is, I believe, beyond us.”
Feelings and desires have definite patterns of neural activity. If those patterns were put into artificial networks, there would be artificial feelings. I find that experiment to be unethical.
Yes, AI already learns; I should not have put that in the sentence about making generalizations that some may find uncomfortable.
The question is whether or not artificial feelings approximate human feelings, and I have already stated my opinion.
Artificial means man-made. Our neurons have axons and dendrites which fire electrochemical impulses, either on or off, like digital electronic signals.
Why not stop the trains and solve the problem before a crash?
Will AI make more AIs? Will they learn and make accurate generalizations that we condemn as biases?
We are descended from little worms with the first nervous systems over 540 million years ago. Behaviors evolved to survive and reproduce. Feelings, simple learning, and intuition evolved first; logic came much later. If we give AI feelings and reproduction, that could be a problem for us.
Jim:
You said, “Artificial means man-made.”
I accept that as the primary connotative meaning.
You said, “Our neurons have axons and dendrites which fire electrochemical impulses, either on or off, like digital electronic signals.”
I am glad you mentioned the unavoidable similarity. Our "system" functions much the same way as the digital system. Off or on, like a light switch, is a good way to describe both systems. Let's call the switching from off to on (or vice versa) a transaction. The key word in your statement is "like." How alike are these two systems? In spite of the human inability to match the speed and quantity of transactions, I believe we will retain a qualitative edge.
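The "off or on, like a light switch" transaction both systems share can be sketched with a classic McCulloch-Pitts-style threshold unit. This is only an illustration of the analogy; the weights and threshold are arbitrary values chosen for the example:

```python
# A minimal sketch of the shared on/off "transaction": a threshold unit
# fires (1) or stays off (0), much like a neuron's all-or-nothing impulse
# or a digital logic gate. Weights and threshold are illustrative.

def threshold_unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two inputs with equal weights and a threshold of 1.0 behave like an OR gate.
print(threshold_unit([0, 0], [1.0, 1.0], 1.0))  # -> 0 (stays off)
print(threshold_unit([1, 0], [1.0, 1.0], 1.0))  # -> 1 (fires)
print(threshold_unit([1, 1], [1.0, 1.0], 1.0))  # -> 1 (fires)
```

The qualitative question in the letter remains open: both systems switch, but how alike the switching really is cannot be settled by a toy model like this one.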
You said, “Why not stop the trains and solve the problem before a crash?”
That would be the optimal outcome, and the digital system is far more likely to achieve it. Go Digital!
You asked, “Will AI make more AIs?”
Yes! We are already teaching them to reproduce themselves. I use the term “themselves” advisedly!
You asked, “Will they learn?”
The term “they” is repeated only under advisement. I believe they will repeat but not learn, much as I often did in school.
You also asked if they will “…make generalizations that we condemn as biases?”
Yes! And that is going to be the source of a continuing problem.
You said, “If we give AI feelings and reproduction, that could be a problem for us.”
They will reproduce and replicate, as mentioned above, but we cannot “give” them feelings. That much is, I believe, beyond us.