Can AI take over our research?

By now, many of you are becoming more aware of the advance of AI (Artificial Intelligence) into the fabric of our society. Some of you may also have become aware over the last month of a “chatbot” known as “ChatGPT.” “ChatGPT is an advanced AI chatbot trained by OpenAI which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT relies on the powerful GPT-3.5 technology. GPT stands for Generative Pre-Trained transformer, a complex neural network based on the revolutionary Attention concept.” (chatgptonline.net)

Okay, we may have lost a few of you so far, but just wait; let’s confuse even more of you. So, what is the “Attention concept”? The quote above sent us to another site for a definition. Hang on to your hats; here it goes. “The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism.” (arXiv.org; Cornell University; Computer Science – Computation and Language; revised December 6, 2017.) The article goes on, but unless you are an IT expert, you are probably already confused enough.

But let’s confuse you even more. “ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022 . . . [and] were trained on an Azure AI supercomputing infrastructure.” Those who developed this AI also share some of its limitations: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.” (chatgptonline.net)
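For the handful of readers who want to peek under the hood, the “attention mechanism” in that quote can be boiled down to a surprisingly small calculation: each word produces a weighted average of every other word, with the weights based on similarity. The sketch below is our own toy illustration in Python with made-up numbers, not ChatGPT’s actual code; the sizes (three words, four dimensions) are arbitrary assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention: each query row attends to all
    key rows, and the output is a weighted blend of the value rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query matches each key
    # Softmax turns each row of scores into weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # blend the values according to those weights

# Three "words," each represented by a 4-number vector (random stand-ins).
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # each word's output now mixes information from all three
```

That weighted blending, repeated in many layers with learned numbers instead of random ones, is the “Attention concept” the definition above was gesturing at.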

We consider ourselves at FREEDOMGOFORTH.COM to be relative novices, neophytes, in the developing area of chatbots and in how AI is developed to support these tools. We do understand that the more information is fed into an AI, the more data and source material it has from which to draw its conclusions. As it has been since the beginning of the computer age, “Garbage In – Garbage Out.” The accuracy of AI is therefore dependent upon the validity of its source material, and upon the potential bias of those providing that material. AI can be set up to accept or reject source material by a human-generated algorithm. Therefore, anything produced through these chatbots needs to be taken with the proverbial “grain of salt.” As with all good research, it should be backed up with multiple sources so that its validity can be determined.
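To make that “human-generated algorithm” point concrete, here is a deliberately crude sketch. Everything in it is hypothetical: the blocklist and the sample “sources” are invented for this example, purely to show how a filter written by one person quietly decides what an AI ever gets to learn from.

```python
# Hypothetical illustration: a human-written filter deciding which
# "sources" an AI may train on. The blocklist is invented for this
# example; a different author would choose a different list, and the
# AI would end up with a different picture of the world.
BLOCKED_KEYWORDS = {"tabloid", "satire"}

def accept_source(source: dict) -> bool:
    """Keep a source only if none of its tags hit the author's blocklist."""
    return not (set(source["tags"]) & BLOCKED_KEYWORDS)

sources = [
    {"name": "Journal A", "tags": ["peer-reviewed"]},
    {"name": "Site B",    "tags": ["tabloid"]},
    {"name": "Paper C",   "tags": ["satire", "opinion"]},
]

training_set = [s["name"] for s in sources if accept_source(s)]
print(training_set)  # -> ['Journal A']
```

Two lines of code, written by a human, and two of three sources never reach the AI at all. That is the sense in which “Garbage In – Garbage Out” cuts both ways: what is kept out shapes the answers just as much as what is let in.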

We were very curious about ChatGPT, so we decided to try it with a couple of questions to see what it might say in response. Quite frankly, we were shocked at the results. Not only was the information provided quickly, in about one minute, but the response was well organized and thought-provoking. Below you will see the question we asked, followed by ChatGPT‘s response.

This was our question:

500-word essay as to why morality is taking a back seat to sociopathy.

Here is ChatGPT’s response:

“In recent years, it seems that morality is taking a back seat to sociopathy in many aspects of our society. Sociopathy, which is characterized by a lack of empathy and disregard for others, is becoming more prevalent and accepted, while morality and ethical behavior seem to be losing ground. There are several reasons why this might be happening.

Firstly, the rise of social media and online communication has created an environment in which people feel more anonymous and less accountable for their actions. This can lead to a lack of empathy and increased aggression, as people are more likely to say and do things they wouldn’t do in person. Social media also creates a “cancel culture” in which people are quick to judge and condemn others without knowing the full story, leading to a lack of forgiveness and empathy.

Secondly, the increasing focus on individualism and self-interest in our society can lead to a lack of concern for others. When people are encouraged to focus solely on their own needs and desires, they may be less likely to consider the impact of their actions on others. This can lead to selfish and unethical behavior, as people prioritize their own interests over the common good.

Thirdly, the dehumanization of others has become more widespread in our society. This can occur when people view others as mere objects or statistics, rather than as complex human beings with their own thoughts, feelings, and experiences. This can lead to a lack of empathy and a disregard for the well-being of others.

Finally, the influence of popular culture and media has led to a normalization of sociopathic behavior. In movies, TV shows, and video games, sociopathic behavior is often glorified and celebrated, leading young people to view it as desirable and cool. This can lead to a desensitization to violence and a lack of empathy for others.

In conclusion, there are many reasons why sociopathy may be taking a back seat to morality in our society. The rise of social media, the focus on individualism and self-interest, the dehumanization of others, and the influence of popular culture and media all contribute to this phenomenon. It is important that we recognize and address these issues in order to promote a more compassionate and empathetic society. This can be achieved through education, increased awareness, and a focus on promoting empathy and ethical behavior in all aspects of our lives. By prioritizing morality and empathy, we can create a better and more just world for all.”

All of the six paragraphs above were written entirely by ChatGPT: some 421 words, under our requested 500. The only glaring flaw we noted was in the first sentence of the last paragraph, where it flipped sociopathy and morality, in direct conflict with the first sentence of the first paragraph. Why, we have no idea, but it is further proof that ChatGPT needs a proofreader, and that whatever it produces needs to be checked and double-checked. Within the next couple of weeks, we will reveal ChatGPT‘s response to our other question. We do not intend to use this AI to develop all of our blogs. We want to be able to trust the multiple sources we use, and we constantly do our best to provide accurate, truthful information. Our goal is always to encourage our readers to fight for our FREEDOM and LIBERTY, even in the realm of AI. We must be vigilant to always seek out the truth, and be on guard not to naively believe everything that we read or hear, as the source may be a biased and flawed AI system.