True AI, in which a physician is presented with a “black box” containing computer programs that are poorly understood, if understood at all, is coming under closer review and criticism. Some professional societies are recommending that AI systems be “transparent.” How can a doctor obtain fully informed consent if he cannot understand the tool he is using well enough to fully inform his patient about it? The deaths that resulted when pilots could not prevent the crashes of their airplanes flying under computer control, because they did not understand the risk involved or how to overcome it, are two cases in point. The black-box problem arises for at least a subset of AI systems, including neural networks, which are trained on massive data sets to produce multiple layers of input-output connections. The result can be a system largely unintelligible to humans beyond its most basic inputs and outputs. But AI has the potential to improve medical care, and it will continue to be introduced.

The program used by the app LOBAK isn’t true AI, but it is based on an algorithm that may not apply to all people equally. For that reason, and for the reasons explained in the disclaimer, the output of LOBAK should not be acted upon without the advice of a physician. LOBAK is more properly considered an example of computer-assisted decision making (CAD): it primarily makes mathematical calculations for the user according to its algorithm. Both the algorithm and the methods of calculation have been disclosed either in the app or on the website LOBAKAPP.com, and they can be fairly easily understood by a user.