The July/August issue of Discover magazine carried an extensive four-article look at Artificial Intelligence (AI) in medicine. By the way, I still object to the misnomer artificial intelligence, because the word artificial connotes something inert and unresponsive, which digital intelligence (DI) certainly is not.

The first article makes it clear that DI will be beneficial in detecting disease earlier than any human being possibly could. That much is already happening. DI far exceeds the human eye in scanning and imaging technologies for the early detection of cancers and heart disease. This obvious advantage is compounded by DI's ability to instantly compare a scan against vast databases of similar scans. Furthermore, in the near future, nanobot technology will produce targeted internal scans and images that are almost unimaginable today.

The second article reveals how DI is starting to overcome the shortage of mental health and medical consultations, especially in rural areas. One technology you may already be familiar with is the chatbot. You have probably seen former Olympic swimmer Michael Phelps in TV commercials having a heartfelt conversation online about his anxieties and insecurities. He may be getting good, helpful, and reassuring advice from the other end, but he is not talking to a human.

The third article explores DI's ability to make ethical decisions. Of course, DI will be no better at making ethical decisions than we are. After all, DI, now and in the future, will be directly responding to the ethical algorithms we feed into it. Someday, DI will face the classic trolley problem: send the train down track A or track B. Sending the train down track A will kill thirteen octogenarians. Sending the train down track B will kill the great-grandson of Albert Einstein. The only real advantage DI has in making an ethical decision like this is the speed at which it can make it. Either way, some will see the decision as wrong and others as right. Garbage in will always be garbage out. The programmers and their bosses will still be liable, not the DI that executes the command.
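For the code-minded, here is a purely illustrative Python sketch of that point. The scenario, the numbers, and the function names are my own invention, not anything from the Discover articles; the idea is simply that the "ethical" choice the machine makes is whatever cost function its programmers hand it.

```python
# Illustrative sketch only: the "ethical" decision a DI makes is entirely
# determined by the weighting its programmers feed it. All names and
# numbers here are invented for the sake of the trolley example above.

from dataclasses import dataclass


@dataclass
class Outcome:
    track: str
    fatalities: int
    description: str


def choose_track(outcomes, weight):
    """Pick the track whose outcome has the lowest weighted cost.

    `weight` is a programmer-supplied cost function -- this is where
    the human ethics (or the garbage) enters the system.
    """
    return min(outcomes, key=weight)


outcomes = [
    Outcome("A", 13, "thirteen octogenarians"),
    Outcome("B", 1, "Einstein's great-grandson"),
]

# One programmer counts only the number of lives lost...
by_count = choose_track(outcomes, weight=lambda o: o.fatalities)

# ...another encodes a judgment about whose life "counts" more.
by_judgment = choose_track(outcomes, weight=lambda o: 0 if o.track == "A" else 100)

print(f"Counting lives only: send the train down track {by_count.track}")
print(f"With a value judgment: send the train down track {by_judgment.track}")
```

Same dilemma, two different programmer-supplied weightings, two different "ethical" answers. The machine contributes nothing but speed.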

The fourth article expands on the third by pointing out that any gender or social bias fed into DI will be replicated unless it is detected and weeded out.
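Again, a toy sketch with invented numbers, just to show the mechanism: a naive model trained on skewed records will faithfully replicate the skew until an audit surfaces it.

```python
# Illustrative sketch only: a model trained on biased records reproduces
# that bias until someone checks for it. The 3:1 skew below is invented.

from collections import Counter

# Hypothetical historical hiring records: (gender, hired). The skew is
# the "garbage" fed in -- nothing about the applicants themselves.
records = ([("M", True)] * 30 + [("M", False)] * 10 +
           [("F", True)] * 10 + [("F", False)] * 30)


def learned_rule(gender):
    """Naive 'model': predict whatever outcome was most common for this
    gender in the training data -- i.e., replicate the historical pattern."""
    outcomes = Counter(hired for g, hired in records if g == gender)
    return outcomes.most_common(1)[0][0]


def audit():
    """Simple bias check: compare predicted outcomes across groups."""
    for gender in ("M", "F"):
        print(f"{gender}: model predicts hired = {learned_rule(gender)}")


audit()
# M: model predicts hired = True
# F: model predicts hired = False   <- the bias, replicated, now visible
```

The audit step is the "detected and weeded out" part; without it, the replicated bias just runs silently.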

Weekly Science Bomb 4!

Write On!