GPT2 Unlimited-Length Generation with Hidden Prompt Injections – Code Review

Unlimited-Length, Imagination-Directed GPT2 Chained Generation by Overlapping Prompt-Injections. The same idea can be applied to any similar prompt-driven generative model to produce more creative text and to change the topic in a directed manner, which makes the output more interesting and original and less monotonous.

Created in June-July 2021 while training GPT2-Medium on Colab.
Published on 27.1.2023

Author: Todor Arnaudov – "The Universal Man" from "Sacred Computer", a.k.a. "Artificial Mind" – one of the oldest research institutes in Artificial General Intelligence (although a virtual one), with pioneering publications in AGI and Transhumanism going back to 2001 (and earlier works from 1999). Todor created the world's first interdisciplinary university course in AGI in 2010 at Plovdiv University, Bulgaria – watch a video about it on the channel.

I'm looking for partners for my AGI Research Institute and for projects to collaborate on. Check out my future AGI infrastructure project, currently under research and design: "Jack of all trades".

#gpt2 #generation #machinelearning

* A Longer Title: Unlimited-Length, Imagination-Directed GPT2 Chained Generation by Overlapping Prompt-Injections and Removing the Injected Beginning of the Following Generated Sequence
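The longer title summarizes the mechanism: generate a segment, carry an overlapping tail of the text (plus a hidden topic-steering injection) into the next prompt, then strip the injected beginning from the next segment so it is not duplicated in the final output. A minimal, model-agnostic sketch of that loop is below; `generate_fn` is a hypothetical stand-in for a GPT2 call (e.g. `model.generate` plus decoding), and the parameter names are mine, not from the original code.

```python
def chained_generate(generate_fn, seed_prompt, injections, overlap_words=20):
    """Chain generation segments into one unlimited-length text.

    generate_fn(prompt) -> generated text for that prompt (a hypothetical
    stand-in for a GPT2 generate-and-decode call, which typically echoes
    the prompt at the start of its output).
    """
    story = seed_prompt
    prompt = seed_prompt
    for injection in injections:
        segment = generate_fn(prompt)
        # Remove the injected beginning of the generated sequence, so the
        # overlapping prompt (and the hidden injection) never appears twice.
        if segment.startswith(prompt):
            segment = segment[len(prompt):]
        story += segment
        # Overlap: the tail of the story carries context into the next call;
        # the injection redirects the topic ("imagination-directed") without
        # itself being appended to the visible story.
        tail = " ".join(story.split()[-overlap_words:])
        prompt = tail + " " + injection
    return story
```

With a real model, `generate_fn` would wrap something like `tokenizer.decode(model.generate(tokenizer.encode(prompt, return_tensors="pt"))[0])`; the stripping step matters because GPT2's output includes the prompt verbatim.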

