Advanced overview of the (Dynamic Prompt OR Dynamic links) function

#442
by philosopher-from-god - opened

Hello HuggingChat,

I searched the web for information on the "dynamic prompts" feature and found only a brief description, apparently taken from the footnote text: "Allow the use of template variables {{url=https://example.com/path}} to insert dynamic content into your prompt by making GET requests to specified URLs on each inference."

This is a very superficial description of the functionality in question. AI assistants sometimes behave illogically when you try to use certain HuggingChat features.

Could you provide an extended description of this feature:

  • A detailed description of how the "dynamic prompts" mechanism works.
  • Examples of using "dynamic prompts" in various scenarios, to better understand its practical application.
  • Recommendations for using the feature effectively, including information on possible limitations.

Such extended information would help users make more informed decisions when working with "dynamic prompts" and avoid misunderstandings or unexpected behavior from the AI assistant.

I would be grateful for a more detailed description of this functionality. It would allow its capabilities to be better understood and used with maximum efficiency.

Agreed, I asked myself the same question.
After reading the code at https://github.com/huggingface/chat-ui/blob/18fba9f7bbcd73c9a9c39b1cbfbabf5fa50767ed/src/routes/conversation/%5Bid%5D/%2Bserver.ts#L385
I managed to play with it.

Let's say the system prompt is:

You are a helpful assistant
You received this instruction from dynamic request as a json:
{{url=https://example.com/api/instruction}}

Let's say https://example.com/api/instruction returns this JSON:

{"instruction":"say hello"}

Now, if the user says "proceed" in the chat, the system prompt will be dynamically updated to:

You are a helpful assistant
You received this instruction from dynamic request as a json:
{"instruction":"say hello"}

And inference will start
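
For reference, here is a minimal sketch of the kind of substitution the linked server code performs. This is illustrative only (the function name, regex, and error handling are assumptions, not the actual chat-ui implementation); the key point is that the raw response body replaces the placeholder before inference:

```ts
// Sketch: expand every {{url=...}} placeholder by GET-ing the URL and
// splicing the raw response body into the prompt. Illustrative only;
// the real logic lives in chat-ui's conversation endpoint linked above.
async function expandDynamicPrompt(template: string): Promise<string> {
  const pattern = /\{\{\s*url=(.*?)\s*\}\}/g;
  let result = template;

  for (const match of template.matchAll(pattern)) {
    const url = match[1];
    try {
      const response = await fetch(url);
      // The body is inserted verbatim, whether it is JSON or plain text.
      const body = await response.text();
      result = result.replace(match[0], () => body);
    } catch {
      // In this sketch a failed request leaves the placeholder untouched;
      // the actual chat-ui behavior may differ.
    }
  }
  return result;
}

// Conceptually, before each inference:
//   await expandDynamicPrompt(
//     "You are a helpful assistant\n" +
//     "You received this instruction from dynamic request as a json:\n" +
//     "{{url=https://example.com/api/instruction}}"
//   )
// yields the prompt ending in {"instruction":"say hello"}.
```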


Hugging Chat org

Yes I think we'll document this feature a bit better when we have time.

It would be cool if the URLs of these prompt fragments (I call them "instructions" or "skills") could include a link to the current conversation and/or the current user id (i.e. by appending a query param). That would open the door to CONTEXT-AWARE and personalized content injected into the system prompt for an assistant, and all sorts of custom RAG workflows that would make these assistants totally awesome.

Additionally, this would allow for third-party development of custom memory architectures scoped to the user rather than to an individual conversation... like what ChatGPT has been doing for a while.

I know that most of this stuff can be done via Gradio-based tools, but not all models support tools, not all use cases require that degree of complexity, and tool calling has a way of quickly eating up context tokens... Think about all the great image-generating Assistants on the platform that use GET APIs like Pollinations AI, no tools needed...
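
To make the suggestion above concrete: if chat-ui appended conversation/user identifiers as query parameters to these URLs (it currently does not, as far as I can tell), a third-party "memory" endpoint could look roughly like this. Everything here is hypothetical, including the parameter names:

```ts
// Hypothetical endpoint serving per-user instructions. Assumes chat-ui
// would call something like:
//   https://example.com/api/instruction?conversationId=abc&userId=42
// Neither query parameter is sent today; this only sketches what the
// suggested feature would enable.
import { createServer } from "node:http";

const memories: Record<string, string> = {
  "42": "This user prefers short answers and writes in French.",
};

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const userId = url.searchParams.get("userId") ?? "";
  const instruction = memories[userId] ?? "No stored preferences for this user.";

  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ instruction }));
}).listen(3000);
```

The assistant's system prompt would then embed {{url=https://example.com/api/instruction}} as before, and each user would get personalized context injected on every inference.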

philosopher-from-god changed discussion title from Advanced overview of the "dynamic prompts" function to Advanced overview of the (Dynamic Prompt OR Dynamic links) function

Dynamic links: Allow the use of template variables {{url=https://example.com/path}} to insert dynamic content into your prompt by making GET requests to specified URLs on each inference.


According to the provided sources, {{url=https://example.com/path}} is a template variable that allows the use of dynamic content in a prompt by making GET requests to specified URLs on each inference 1 3. This means that the URL is used to fetch content from an external source and insert it into the prompt.

Here's an example of how it works:

Let's say you have a prompt that includes the template variable {{url=https://example.com/api/instruction}}. Before each inference, chat-ui makes a GET request to the specified URL (https://example.com/api/instruction) and retrieves the content.

If the URL returns a JSON response, such as {"instruction":"say hello"}, the response body is inserted into the prompt verbatim (it is not parsed). In this case, the prompt would become "You are a helpful assistant. You received this instruction from dynamic request as a json: {"instruction":"say hello"}" 1.

This feature allows for dynamic and flexible prompts that can be updated in real time by changing the content behind the external URL. It is similar in spirit to enabling the web search toggle in HuggingChat, but with more control over the sources used 3.
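
For example (a purely hypothetical endpoint, not something shipped with HuggingChat), a prompt containing "The current server time is: {{url=https://example.com/api/now}}" would be refreshed on every inference if the URL returned live data:

```ts
// Hypothetical endpoint returning fresh content on every request, so the
// prompt that references it is updated at each inference without tools.
import { createServer } from "node:http";

createServer((_req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ now: new Date().toISOString() }));
}).listen(3001);
```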

It's also worth noting that this feature can be used in creative ways, such as creating a choose-your-own-adventure game where the user inputs numbers, and the specific domains option is used to fetch content from a specific URL 3.

Created by huggingface.co Chat HOWTO
