New Chain-Of-Feedback Prompting Technique Spurs Answers And Steers Generative AI Away From AI Hallucinations

Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with over 7.4 million amassed views of his AI columns. A seasoned CIO/CTO executive and high-tech entrepreneur, he combines practical industry experience with deep academic research.

In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc.

Providing feedback amid a problem-solving effort is something you would readily do when interacting with fellow humans. In the case of using generative AI, you likewise provide feedback during a problem-solving encounter and try to keep the respective steps heading in a suitable direction. It takes a bit of added effort on your part. Sometimes this extra effort will be worthwhile. Not all the time, but certainly some of the time.

Is there anything wrong or potentially off-putting about asking the generative AI to redo or reassess its initial answer? Turns out that you can easily make your way, stepwise, into a dismal abyss with generative AI. How deep does the hole go? You can keep repeating the redo request and wait for the wildest of reactions, which some people do just for kicks. I’m not saying that a repeated series of redo efforts will guarantee an outlandish response, but it can occur.
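To make the contrast concrete, here is a minimal sketch of a blind “redo” request versus feedback that names the suspect step. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name, prompt wording, and sample arithmetic question are my own illustrative placeholders, not from the column or the underlying research.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(messages):
    """Send the running conversation to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; swap in whichever you use
        messages=messages,
    )
    return response.choices[0].message.content

question = ("Solve step by step: a train travels 120 miles in 1.5 hours. "
            "What is its average speed?")
thread = [{"role": "user", "content": question}]
first_answer = ask(thread)
thread.append({"role": "assistant", "content": first_answer})

# Blind redo: gives the model nothing to anchor on. Repeating this over
# and over is how answers can drift into the "dismal abyss" noted above.
redo_answer = ask(thread + [{"role": "user", "content": "Redo your answer."}])

# Targeted feedback: name the suspect step so the model revises only it.
feedback = ("Step 2 looks wrong: check whether you divided miles by hours "
            "or hours by miles. Keep the other steps and redo only step 2.")
guided_answer = ask(thread + [{"role": "user", "content": feedback}])

print(redo_answer)
print(guided_answer)
```

The design point is simply that the second request carries information the model can act on, while the first invites it to wander.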

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything definitive related to your inquiry. Similarly, if you put distracting words into your prompt, the odds are that the generative AI will pursue an unintended line of consideration. For example, asking for a mortgage calculation while mentioning in passing that your cousin once defaulted on a loan may send the AI down a tangent about loan defaults rather than the arithmetic you wanted.

The researchers behind this technique put it this way: “Recent studies, however, have shown that LLMs are prone to generate contradicting sentences or be distracted with irrelevant context, ultimately leading to hallucination.” In their empirical work, the researchers showcased that these computational pattern-matching shenanigans can occur. I’ll show you similar examples in a moment, using ChatGPT to illustrate what can happen.

I’d like to note that their approach has you use a different generative AI to retry a step that seems to have gone awry in a problem being solved by the first AI. “Similar to the CoF setting, R-CoF takes in a multistep reasoning question. Then, it outputs a response and the reasoning steps it took to reach the final answer. Given that the initial response is incorrect, the LLM freezes all the correct steps, makes a recursive call to a separate LLM to adjust the incorrect step, and finally incorporates the correct reasoning in the process of refining its final answer.”
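As a rough illustration of that quoted flow, here is a single-pass sketch in Python, again assuming the OpenAI SDK. The model names, prompt wording, and the way the bad step gets flagged (`bad_index`) are my own assumptions; the researchers’ R-CoF can also recurse further when the helper model’s own reasoning needs repair, which this sketch omits.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(model, prompt):
    """One-shot call; R-CoF deliberately routes the repair to a separate model."""
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def recursive_chain_of_feedback(question, steps, bad_index,
                                main_model="gpt-4",
                                helper_model="gpt-3.5-turbo"):
    # Freeze every reasoning step except the one flagged as incorrect.
    frozen = "\n".join(s for i, s in enumerate(steps) if i != bad_index)

    # Recursive call to a *separate* LLM that sees only the flagged step.
    fixed_step = ask(helper_model,
                     f"Question: {question}\n"
                     f"This reasoning step is incorrect: {steps[bad_index]}\n"
                     "Provide a corrected version of just this step.")

    # Incorporate the corrected step and refine the final answer.
    return ask(main_model,
               f"Question: {question}\n"
               f"Verified steps:\n{frozen}\n"
               f"Corrected step: {fixed_step}\n"
               "Using only these steps, state the final answer.")
```

Per the quoted passage, the key choice is that the repair goes to a separate LLM rather than re-asking the model that made the mistake, which is precisely the repeated-redo trap discussed earlier.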

 
