2025-06-15
AI is simply a label that may mean many things to many people.
Some, such as Mark Playne of NOT ON THE BEEB, and Martin Geddes of Future of Communications, have learned that to get their AI to respond usefully to their needs, they may have to train it themselves.
Others, less critical, will use AI as-is and believe everything it tells them. They may find illusion or disappointment, depending on how their AI was trained by its developers. Nothing guarantees that the training materials were well-founded, coherently respectful of truth, and appropriate to all the uses to which the AI could eventually be put.
For my own part, I suspect that where the subject matter is straightforwardly factual (such as the Statute Book), AI is likely to quote it accurately; but where value judgements are concerned ("desirability", "benefit to mankind", or philosophies deemed "damaging"), we should view its output with a critical eye and ask it to explain itself as necessary.
If we don't, we risk falling prey to all the assumptions (implicit or explicit) that underpinned its training materials, with consequences that no man may foresee ...
Perchance over time we may even find ourselves developing some level of insanity, since these underlying assumptions are not necessarily mutually coherent ...