When the Master Must Know More: Why AI Demands Better-Trained Humans
Modern technology is flooding the market with AI this and AI that.
As I have often remarked, AI is just another tool, not dissimilar to the spell checkers of the past and the grammar checkers of today.
With modern computers becoming ever more powerful and faster than before, endless amounts of data can be thrown at them so that human-like responses can be elicited.
So far so good. It is when you throw the good, the bad and the ugly of data at the machine that mismatches and misinformation occur.
I have already documented some of the horrible mistakes made by ChatGPT, mistakes that are echoed in DeepSeek.
So is the problem the poor training of the AIs themselves?
No. The real question now is the proper training of the users of AI.
AI will not take over your job. It will, however, take over your mistakes and multiply them without anyone being aware of them.
That is, if users continue to treat AI as the only source of truth.
Today's users must be better trained. They have to know their trades even better than before, trained in what they actually do, not in the pretty presentations they need to produce.
A working knowledge is no longer enough. A deep dive into the underlying mechanics of each task they perform can only give them an advantage in the enterprises of today.
Understanding how to ask ChatGPT the right questions is not the same as understanding when the AI is producing erroneous results.
The language looks nice, the data seems to be correct, but when you look more deeply, many wrong assumptions have been made. Like spell checkers and grammar tools before it, AI is just a tool, only much more powerful, and therefore more dangerous in the hands of the untrained.
When watching Chat go through the analysis stage, it is interesting to see how it instructs some of its Bots to provide a SIMULATED list of results based on the limited sample it shares with them.
As an example, I asked ChatGPT to list all the daily US Dollar to Brazilian Real conversion rates from the beginning of the century. It produced a very credible list, until I reviewed the instructions it had issued: the word SIMULATE appeared in them. When I queried Chat on whether that was the case, it confirmed that yes, these were simulated results, because it could not produce a comprehensive list of REAL results. If I wanted those, I should download them myself from a number of different sources.
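And that advice is sound: the real figures are a short script away. Below is a minimal sketch in Python of doing exactly that, assuming the Banco Central do Brasil's open PTAX service; the endpoint URL, date format, and field names are my assumptions about that API and should be verified against its documentation, and any daily-rate source you trust would do just as well.

```python
import requests

# Assumed endpoint of the BCB "Olinda" PTAX service; confirm it against the
# bank's open-data documentation before relying on it.
BASE = ("https://olinda.bcb.gov.br/olinda/servico/PTAX/versao/v1/odata/"
        "CotacaoDolarPeriodo(dataInicial=@dataInicial,"
        "dataFinalCotacao=@dataFinalCotacao)")

def fetch_usd_brl(start, end):
    """Fetch the official daily USD/BRL quotes between start and end,
    both given as MM-DD-YYYY strings (the format this API expects)."""
    url = (f"{BASE}?@dataInicial='{start}'&@dataFinalCotacao='{end}'"
           "&$top=10000&$format=json")
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    # The service wraps its rows in a "value" list; each row carries fields
    # such as cotacaoCompra, cotacaoVenda and dataHoraCotacao.
    return resp.json()["value"]

if __name__ == "__main__":
    # One month as a demonstration; loop over ranges to cover the century.
    for quote in fetch_usd_brl("01-01-2000", "01-31-2000"):
        print(quote["dataHoraCotacao"], quote["cotacaoVenda"])
```

A few lines of real downloading replace an entire simulated list, and you know exactly where every number came from.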
An inexperienced user often falls into the trap of believing everything produced by Chat is real.
And just to be sure, DeepSeek exhibits similar behaviour. I haven't tested Gemini yet, nor Copilot (I do not have a licence to fully test its capabilities), but it would not surprise me to learn that their responses are the same, or very similar.
Yes: companies and individuals themselves must embark on more training in their art, the professions at which they should be far more adept.
Then, and only then, can they fully utilize the resources and results provided by today's AIs. These individuals need to look critically at what is produced, and then learn how to use the AI, with the right prompts, to produce more consistently correct results.
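That critical look can even be mechanised. The sketch below, with entirely made-up sample figures, shows the idea: spot-check the AI's numbers against an independently fetched reference (such as the central-bank data from the earlier sketch) and flag anything that disagrees, rather than accepting the whole list on faith.

```python
TOLERANCE = 0.005  # maximum relative difference accepted between sources

def spot_check(ai_rates, reference_rates):
    """Return a report of dates where the AI's figure disagrees with the
    independently obtained reference value."""
    problems = []
    for date, ai_value in ai_rates.items():
        ref_value = reference_rates.get(date)
        if ref_value is None:
            problems.append(f"{date}: no reference value to check against")
        elif abs(ai_value - ref_value) / ref_value > TOLERANCE:
            problems.append(f"{date}: AI says {ai_value}, "
                            f"source says {ref_value}")
    return problems

# Made-up sample figures, purely to show the mechanics: the second AI value
# is wildly off and should be flagged.
ai_rates = {"2000-01-03": 1.8037, "2000-01-04": 9.9990}
reference_rates = {"2000-01-03": 1.8037, "2000-01-04": 1.8500}

for report_line in spot_check(ai_rates, reference_rates):
    print(report_line)
```

The point is not this particular script; it is the habit of never letting an AI-produced figure into your work without an independent check.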
If they can’t recognize garbage, they’ll feed it into the corporate food chain without a second thought.
