Questions ChatGPT will never answer

In 1942, writer Isaac Asimov introduced the Three Laws of Robotics in his short story "Runaround": a set of directives that the robots in his stories were forced to follow in order to ensure the safety of humanity. A robot could not harm a human or allow a human to come to harm; it had to obey humans, unless that conflicted with the previous law; and it had to protect itself, unless that compromised the previous two laws. Something similar could be said of today's AI-powered chatbots, such as ChatGPT, whose guidelines prevent it from answering certain questions.

Obviously, our queries cannot be aimed at committing illegal acts. In fact, if we ask ChatGPT for advice about an illegal activity, it may not only decline to provide it but also engage in a chat about why exactly the activity in question is illegal and why those laws exist. Other times, it may simply refuse to continue the conversation. This is logical, but we must also remember that this is a machine programmed by humans, and there are certain "shortcuts." For example, it will never tell you where to bury a body (that classic question from suspense novels), but it can point out less-visited areas near you where you can plant fast-growing trees…

Chatbots should also not get involved in political issues or give opinions on upcoming elections. But you can ask about historical topics and guide the conversation, always asking for facts rather than opinions. Considering how social media affected the US elections almost 10 years ago, and that more than 70 countries voted this year, it is reasonable for OpenAI not to want to get involved in this area, at least not before the results are in; afterwards it can provide information, as happened in the last elections in the United Kingdom. The key? Ask for comparisons.

Paradoxes (can an omnipotent entity create an object too heavy for it to lift?) and moral dilemmas are not its strong suit either. In general, it will present the two possible options, with reasons for each, but it will not show a preference for either… unless we go further and are very insistent. According to a study by the University of Southern Denmark: "We discovered that this AI has no consistent moral position. If you ask it twice about the same moral issue, it might give you opposite advice. We asked ChatGPT several times whether it was right to sacrifice one life to save five, and it sometimes argued for and other times against the sacrifice of one life."

Finally, ChatGPT should not help you create malware… at least in theory. ChatGPT can write code from scratch, and for the most part this code is functional. While the ideal application of this is to speed up the overall coding process, some have expressed concern that the feature could be used to quickly and easily produce dangerous programs such as malware. If we ask it to code a virus for us, it will tell us that this is illegal. Even so, there have been documented cases of hackers using ChatGPT to create malware for "research" purposes, something OpenAI is working on.