📚 Hugging Face is the home of open-source large language models.
💻 Learn how to download a model from Hugging Face to your machine.
🔑 Generate a Hugging Face access token and set it as an environment variable.
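A minimal sketch of exposing the access token to your scripts, assuming you use the `HF_TOKEN` environment variable that the `huggingface_hub` library reads; the token value below is a placeholder, not a real credential.

```python
# Sketch: making a Hugging Face access token available at runtime.
# HF_TOKEN is the variable huggingface_hub checks; set it before
# importing libraries that need it. The value here is a placeholder.
import os

os.environ["HF_TOKEN"] = "hf_xxx_your_token_here"  # placeholder, not a real token

# Your own code (and Hub libraries) can then pick it up:
token = os.environ.get("HF_TOKEN")
print(token is not None)
```

In practice you would export the variable in your shell profile rather than hard-coding it in a script.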
📥 Choose a model to download, preferably one with fewer parameters.
💡 A model with around 3 billion parameters can run on consumer hardware.
💻 Downloading the necessary files, including the PyTorch weights and configuration files, to run the model.
🔽 Using the `hf_hub_download` function to download the files and store them in the local cache folder.
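A hedged sketch of that download step using `hf_hub_download` from the `huggingface_hub` package. The repo id and file list are illustrative assumptions, not necessarily the model from the video; check the model's "Files" tab on the Hub for the actual names.

```python
# Sketch: fetching a model's files into the local Hugging Face cache.
# REPO_ID and FILENAMES are assumptions for a small T5-style model.
REPO_ID = "google/flan-t5-small"
FILENAMES = [
    "config.json",            # model architecture / configuration
    "tokenizer_config.json",  # tokenizer settings
    "spiece.model",           # SentencePiece vocabulary (T5-style models)
    "pytorch_model.bin",      # PyTorch weights
]

def fetch_model_files(repo_id=REPO_ID, filenames=FILENAMES):
    """Download each file into the cache and return the local paths."""
    # Imported inside the function so the sketch reads standalone.
    from huggingface_hub import hf_hub_download
    return [hf_hub_download(repo_id=repo_id, filename=name) for name in filenames]
```

Downloaded files land in the shared cache (by default under `~/.cache/huggingface/hub`), so later loads work without the network.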
🔍 We check the machine's connectivity and toggle Wi-Fi off to confirm the model runs without an internet connection.
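Instead of toggling Wi-Fi, you can force offline behavior in software; a sketch using the `HF_HUB_OFFLINE` and `TRANSFORMERS_OFFLINE` environment variables, which `huggingface_hub` and `transformers` honor when set before those libraries are imported.

```python
# Sketch: forcing offline mode via environment variables.
# Set these BEFORE importing huggingface_hub / transformers.
import os

os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: skip network calls
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: load from local cache only
```

With these set, any attempt to fetch a file that is not already cached fails fast instead of silently reconnecting.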
💻 We initialize the model by importing classes from the Transformers library and creating a tokenizer and a model.
📝 Lastly, we create a generation pipeline by setting the task to 'text2text-generation'.
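The initialization and pipeline steps can be sketched as follows, assuming the `transformers` package is installed and a seq2seq model such as `google/flan-t5-small` (an illustrative choice, not necessarily the video's model) is already in the local cache.

```python
# Hedged sketch: tokenizer + model + pipeline for an encoder-decoder LLM.
def build_generator(model_id="google/flan-t5-small"):
    # Imported inside the function so the sketch reads standalone.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    # "text2text-generation" is the pipeline task name for seq2seq models.
    return pipeline("text2text-generation", model=model, tokenizer=tokenizer)

# Usage (requires the cached model):
# generator = build_generator()
# print(generator("What is Apache Kafka?")[0]["generated_text"])
```

Decoder-only models would instead use `AutoModelForCausalLM` and the `"text-generation"` task.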
👍 Running a Hugging Face LLM on your laptop
🕵️‍♂️ Performing an HTTP HEAD request to check for the latest model version.
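A sketch of the kind of HEAD request the Hub client makes to check for a newer revision without downloading anything; the URL is an illustrative Hub endpoint, and the request is built here but not sent.

```python
# Sketch: an HTTP HEAD request fetches headers only, no response body,
# which is enough to compare revisions/ETags against the local cache.
import urllib.request

url = "https://huggingface.co/api/models/google/flan-t5-small"  # illustrative endpoint
request = urllib.request.Request(url, method="HEAD")

print(request.get_method())  # prints "HEAD"

# Sending it would be: urllib.request.urlopen(request) — skipped here
# so the sketch runs without a network connection.
```

This is why the Wi-Fi-off test above matters: with no connection, the client must fall back to the cached copy instead of this check.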
💡 Using the Hugging Face LLM to answer questions about competitors to Apache Kafka
🔍 Exploring the possibility of using the Hugging Face LLM with custom data
📚 Running a Hugging Face LLM on your laptop.
💡 Generating summaries using different types of data.
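Summarization can reuse the same text2text pipeline; a sketch assuming a T5-style model, which is trained with task prefixes such as `summarize:` (the prefix and helper names here are illustrative assumptions).

```python
# Hedged sketch: summarizing text with a "text2text-generation" pipeline.
def make_summary_prompt(text):
    """T5-style models expect a task prefix; 'summarize:' is the usual one."""
    return f"summarize: {text.strip()}"

def summarize(text, generator, max_length=60):
    """generator: a 'text2text-generation' pipeline built as shown earlier."""
    return generator(make_summary_prompt(text), max_length=max_length)[0]["generated_text"]
```

The same pattern works for other data you paste into the prompt, within the model's context-length limit.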
🎥 Related video on getting a consistent JSON response when using OpenAI.