
ChatGPT, what happens if we are nice to the chatbot

ChatGPT is one of the most revolutionary technologies of recent decades, so it is no surprise that users and artificial intelligence experts keep running tests to understand how far this chatbot can go.

The latest one concerns a rather curious idea that has monopolized the web's attention in recent weeks. The idea starts from the assumption that if you speak kindly to ChatGPT, the answers it provides may be better than those given to questions formulated in a neutral or, worse, “aggressive” manner.

Many have already tried interacting with the chatbot in this way, and the result of this “experiment” could give developers plenty of ideas. Let’s find out more about it.

What happens if you talk kindly to ChatGPT

Before getting to the heart of this test, a premise is needed: first of all, we must not overlook the randomness in the answers, since ChatGPT is designed not to give exactly the same answer twice.

This means that the chatbot may change the form and structure of its answers, while obviously keeping the overall meaning unchanged.
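
This variability is easy to see for yourself. As a minimal sketch, assuming access to the OpenAI Python SDK (the model name and question below are placeholders, not part of the original experiment), sending the exact same question twice will usually return two differently worded answers, because responses are sampled rather than reproduced verbatim:

```python
# Minimal sketch: the same question, asked twice, usually comes back worded differently.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
question = "Explain in two sentences why the sky is blue."  # placeholder question

for attempt in (1, 2):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=1.0,      # in the API, higher temperature means more variation between runs
    )
    print(f"--- attempt {attempt} ---")
    print(reply.choices[0].message.content)
```

Keeping this variability in mind matters here, because any single comparison between two answers could differ simply by chance.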

Secondly, it must be emphasized that these experiments are not based on any rigorous data analysis; they stem exclusively from an idea some users had and tried to put into practice with their own accounts.

In any case, according to what the testers noticed, it seems that when a question is formulated politely, ChatGPT puts more effort into the answer, organizing the text more carefully and opting for slightly richer formatting, with bold text and so on.

When the question was formulated in a neutral way, however, the chatbot responded more plainly and with fewer of these touches.

Many have also noticed a friendlier “tone of voice”. It is not clear whether the artificial intelligence is really capable of being friendlier with kinder users, or whether this is simply a reflection of the way the question was phrased, a kind of adaptation to the tone used by the user.
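
For readers who want to try repeating this informal comparison through the API rather than the chat interface, a rough sketch might look like the following; the model name and the two prompts are purely illustrative and are not taken from the original experiment. The same question is sent once in a neutral form and once with polite phrasing, and the two answers are then compared by eye:

```python
# Rough sketch of the informal "politeness" comparison described above.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

prompts = {
    "neutral": "List three tips for writing a good CV.",
    "polite": "Hi! Could you please list three tips for writing a good CV? Thank you very much.",
}

for tone, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {tone} prompt ===")
    print(reply.choices[0].message.content)
    print()
```

Given the randomness mentioned above, any difference between the two answers would need to be checked over many repetitions before drawing conclusions.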

The test results

As mentioned at the beginning, this test is not supported by data of any kind and remains just an attempt by users who wanted to follow “a hunch” and see where it would take them.

Despite this, many who carried out the test had the feeling that the chatbot gave better answers to questions formulated more politely.

The value of the answers is not in question, and the correctness of the information is the same. However, when speaking to the AI more politely, some noticed small improvements that made the generated texts more complete in terms of both content and formatting.

For the moment OpenAI has given no indications in this regard but, as anticipated, polite wording such as “please” could influence the model, pushing it towards a friendlier tone of voice in line with the request made.

This is currently the most widely accepted hypothesis, although there are also those who maintain that a little kindness and good manners, in general and not only when talking to an artificial intelligence, is hardly a bad thing and should, in fact, be the norm.

This experiment, although devoid of any scientific basis, was therefore interpreted by many as a sign, and as an encouragement for users to always be kind, even when talking to a smart assistant that may not be as cold and unfeeling as you think, and at which it is wrong to vent your anger in any case.

To find out more: ChatGPT, what it is, how it works, what it is for, how to use it for free
