posted: Fri, Jun 28, 2024 |
tagged: | electronics | ai | tech
Reference Links:
AI - meaning Artificial Intelligence and not Apple Intelligence...
As an enthusiast of AI-powered projects, I'm a touch excited to share my hands-on experience with the Ollama engine, the Llama3 model, and the NVIDIA Jetson Orin Nano. In this post, I'll walk you through my journey of testing the model's capabilities and observing the Nano's performance, which left me pleasantly surprised. I wouldn't call these performance observations benchmarks, and they are hardly scientific. But they are, in fact, interesting.
To get started, I picked up an NVIDIA Jetson Orin Nano, a powerful, small-form-factor computer ideal for AI development, robotics, and more. For those new to the Jetson line of products, take a look at the NVIDIA Jetson resources for a deeper dive.
I decided to test the capabilities of Meta's Llama3 model by running it under the Ollama engine on the Jetson with Ubuntu Desktop (the unit ships with Ubuntu pre-installed). Llama3 is a large language model, and on the Jetson, Ollama can leverage the GPU's CUDA cores for accelerated inference. To my surprise, the performance of the Nano was impressive. The device handled the demanding AI computations seamlessly, showcasing the power of those CUDA cores. I was eager to see how well the Nano would perform compared to a more powerful machine like the M2 MacBook Pro.
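For anyone who wants to poke at the same setup, Ollama exposes a local REST API (on port 11434 by default). Here's a minimal sketch of calling Llama3 from Python; it assumes a running Ollama server with the llama3 model pulled, and the endpoint and field names follow Ollama's /api/generate interface:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes `ollama serve` is running)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response instead of a token stream.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```

The same script runs unchanged on the Jetson and the MacBook, which is what makes this kind of side-by-side tinkering so easy.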
To conduct an admittedly non-scientific, non-reproducible comparison, I ran the same Llama3 model via Ollama on an M2 MacBook Pro. On the MacBook, the model appeared to run entirely CPU-bound, and I had expected it to outperform the Nano thanks to its more powerful processor.
However, the results were rather surprising. The Jetson Orin Nano performed remarkably well, rivaling the MacBook's performance in many instances. This was largely due to the Nano's ability to offload computational tasks to its GPU, which significantly improved processing times.
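To put rough numbers on a comparison like this, Ollama's API responses (and `ollama run --verbose`) report an eval_count and an eval_duration, from which you can derive tokens per second — the usual yardstick for eyeball benchmarks. A small helper, assuming eval_duration is in nanoseconds as Ollama documents it; the sample figures are hypothetical, not my measurements:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Tokens/sec from Ollama's eval_count and eval_duration (nanoseconds)."""
    return eval_count / (eval_duration_ns / 1e9)

# Hypothetical illustration: 256 tokens generated in 12.8 seconds
print(tokens_per_second(256, 12_800_000_000))  # 20.0 tok/s
```

Comparing this one number across the Nano and the MacBook is crude, but it captures exactly the "feel" difference I describe above.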
My experience tinkering with Ollama and Llama3 humming along on the NVIDIA Jetson Orin Nano has been nothing short of impressive. The device's capabilities, combined with the power of its CUDA cores, make it an ideal platform for AI development. I'm thrilled to have discovered that the Nano can hold its own, in its own special ways (not all ways, mind you), against more powerful machines like the M2 MacBook Pro.
If you're interested in exploring the world of AI and computer vision on the Jetson Orin Nano, hit the links provided above for more information or to pick up a Nano for yourself.
Beyond raw performance, the device's capabilities, combined with its affordability and ease of use, make it an excellent choice for anyone looking to dip their toes into the world of AI development. The Jetson is a relatively affordable way to get introduced to this exciting world, running your own models on your own infrastructure, decoupled from the cloud. And while it isn't performant on the order of GPT-4o, it is impressive in its own right.
I asked Llama3 what the model is designed to do well, and how the model is tuned. Here are the results of my prompt. I was rather surprised by some of the intended use cases — and, as it turns out, the response below isn't accurate (Llama3 is a text-only language model, not a text-to-image model), a good reminder that these models can confidently hallucinate, even about themselves.
The Llama3 model is a pre-trained language model that has been fine-tuned on a specific task, which is text-to-image synthesis. This means it's well-suited for tasks that involve generating images from natural language descriptions.
Some examples of work the Llama3 model can be used for include:
The Llama3 model is particularly well-suited for tasks that involve generating images from text descriptions in a specific domain or niche. For example:
Keep in mind that while the Llama3 model is a powerful tool, it's not a magic bullet that can solve all your image generation needs. It may require additional fine-tuning or adaptation to specific tasks or domains.