Every company wants to use AI nowadays, and while there are many benefits, most people don't realize how unreliable these systems can be. Language models such as ChatGPT analyze the input text and generate a response based on it. They don't actually look at individual letters; they split the text into tokens and work with numeric representations of those tokens.
As a result, a simple question such as "How many rs are in the word strawberry?" can easily produce a wrong answer.
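For comparison, ordinary string code sees every character and gets this right every time. The sketch below counts the letters directly; the token split shown in the comment is purely hypothetical, since actual tokenizations vary by model and vocabulary.

```python
word = "strawberry"

# A character-level count is trivial and exact.
print(word.count("r"))  # prints 3

# A language model, by contrast, might see something like
# ["str", "awberry"] (a hypothetical split; real vocabularies differ),
# so the individual letters inside each token are not directly
# visible to it.
```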
The model also tends to be confidently wrong. The output from ChatGPT 4o looks like this:
How many rs in the word strawberry?
-The word "strawberry" contains two "r" letters.
Where are the rs?
-In the word "strawberry," the "r" letters are located at the 5th and 8th positions:
1. S
2. T
3. R
4. A
5. W
6. B
7. E
8. R
9. R
0. Y
So, the "r" letters are the 5th and 8th characters in the word.
-----------------
And no, I did not mistype. The last letter was assigned a 0 instead of a 10.
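For reference, finding the actual positions takes one line of ordinary code, and it disagrees with the model on both the count and the locations:

```python
word = "strawberry"

# 1-indexed positions of every "r" in the word.
positions = [i for i, ch in enumerate(word, start=1) if ch == "r"]
print(positions)  # prints [3, 8, 9]
```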
The model also loves to make up fake quotes to make its text more believable. Even when explicitly told "DO NOT MAKE UP OR USE FAKE QUOTES," it will still often choose to do so.
In their current state of development, these models can at best serve as assistant tools. They are nowhere near capable enough to run independently.