How to utilize the new local LLM features?

By Brandon Jiaconia @Brandon_Jiaconia
    2024-07-10 18:44:15.782Z

    @Kitch Can you explain how to use the new local LLM features that were released?

    • 7 replies
    1. Kitch Membery @Kitch 2024-07-10 19:23:52.938Z

      Thanks for posting here, @Brandon_Jiaconia,

      I'll get back to you on this ASAP.

      1. Kitch Membery @Kitch 2024-07-10 19:45:20.539Z

        Thanks again for asking... Here's a video on how to use the LLM feature. :-)

        https://www.loom.com/share/6362befc0e434ddb83a697198e17de57?sid=188c9f44-15ff-44a4-9dd6-908b1e946759

        1. Kitch Membery @Kitch 2024-07-10 20:11:36.077Z

          Here is the script. :-)

          // Ask the local LLM (the Llama 3 model) a question and grab the text of its response
          let llmResponse = sf.ai.createCompletion({
              model: "llama3",
              prompt: "What is the best sample rate for recording?",
              showProgress: true,
          }).response;

          // Print the LLM's answer to the log
          log(llmResponse);
          
          1. Matt Friedman @Matt_Friedman 2024-07-10 20:45:39.315Z

              What are some real-world use cases for this, though? Surely it's more than just a way to do what you can already do in ChatGPT, right?

              For example, can I ask it to write a Soundflow script for me that will do x, y and z?

              1. Hi Matt,

                The point of having access to a local LLM via scripting is to build it into other scripts, so that they can dynamically build the prompt and potentially parse the output. I'll be making some videos about using this in context soon.

                If you're impatient, you can google things like AI Agents and LangChain for inspiration as to how this is being used in the wider industry.

                Think of a local LLM as a way to script what you're asking the LLM, and then use that response in more scripts, which can then talk to the LLM again. This way, you can build entirely dynamic "agents" that can perform quite complex tasks.

                Adding this core building block is just the beginning. There's a ton to explore here.
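                To make the chaining idea concrete, here's a minimal sketch that calls sf.ai.createCompletion twice and feeds the first response into the second prompt. It assumes only the createCompletion call shown in the script earlier in this thread; the example context string and the prompt wording are made up purely for illustration.

                // Minimal sketch of chaining two LLM calls. Only sf.ai.createCompletion
                // (as used in the script above) is assumed; the prompts and the example
                // context string are illustrative.

                // Step 1: some regular scripting produces context for the first prompt.
                let sessionNotes = "Dialogue edit for episode 12; lots of room-tone issues.";

                let firstAnswer = sf.ai.createCompletion({
                    model: "llama3",
                    prompt: "In one short sentence, what is the main task described here?\n\n" + sessionNotes,
                    showProgress: true,
                }).response;

                // Step 2: build the next prompt dynamically from the first response.
                let secondAnswer = sf.ai.createCompletion({
                    model: "llama3",
                    prompt: "Suggest three concrete next steps for this task:\n\n" + firstAnswer,
                    showProgress: true,
                }).response;

                log(secondAnswer);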

                1. One real-world use case I have as a prototype is a "Gmail Organizer" which reads my Gmail inbox, and based on all of the thread subjects, decides which of them are important, and then opens those in new tabs.

                  Another real-world use case is the Voice Commands feature, which uses the local LLM to parse what you said into a list of commands that are then executed. This lets you use natural language to describe a series of actions.

                  Both of these combine regular SoundFlow scripting with the local LLM: the script feeds some JSON input into the LLM along with natural-language (English) instructions, receives output back in JSON format, and then parses that output and uses it in the script's ongoing execution.
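                  As a rough sketch of that JSON-in/JSON-out pattern (again assuming only sf.ai.createCompletion from the earlier script; the subject list and the prompt are invented, and a real script would want more robust handling of the model's output):

                  // Sketch of the JSON-in/JSON-out pattern. The subjects array and the
                  // prompt are illustrative; only sf.ai.createCompletion is taken from
                  // the earlier script.
                  let subjects = [
                      { id: 1, subject: "Invoice overdue - studio rental" },
                      { id: 2, subject: "Newsletter: plugin deals this week" },
                  ];

                  let answer = sf.ai.createCompletion({
                      model: "llama3",
                      prompt: "Here are email subjects as JSON:\n" + JSON.stringify(subjects) +
                          "\nReturn ONLY a JSON array containing the ids of the important ones.",
                      showProgress: true,
                  }).response;

                  // Parse the model's JSON output and use it in the rest of the script.
                  // A real script would guard against the model returning extra text.
                  let importantIds = JSON.parse(answer);
                  log(importantIds);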

                  Asking it to build SoundFlow scripts is currently out of scope, but absolutely something we'd like to be able to do.

                  1. Agents that do things like "search Google for something" and then summarize the results, or read the individual web pages to find the most relevant one, could all be built using the new Chrome integration combined with the local LLM feature.

                    While we now have the core building blocks in place, we hope to add more easy-to-use wrappers around some of the higher-level functions I just discussed.
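                    As a very rough sketch of just the summarizing step (the page text below is a placeholder; how you'd actually fetch it via the Chrome integration isn't shown here):

                    // Sketch of the "summarize a web page" step only. pageText is a
                    // placeholder; fetching it via the Chrome integration is not shown.
                    let pageText = "...text extracted from a web page...";

                    let summary = sf.ai.createCompletion({
                        model: "llama3",
                        prompt: "Summarize the following page in three short bullet points:\n\n" + pageText,
                        showProgress: true,
                    }).response;

                    log(summary);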