Nemotron-3-nano-30b-a3b, Nvidia’s new thinking LLM, was tested locally on a laptop with electronics, logic, and programming tasks.
It uses a Mixture-of-Experts design with a Mamba-2 + Transformer hybrid, 30 billion parameters, and only 3-4 billion active during inference.
The model was run in Ollama on Windows; the download occupied about 23 GB, and Nvidia says it supports up to 1M tokens and commercial use.
It handled resistor decoding, LED resistor sizing, console-star graphics, and simple HTML+JS games, but hallucinated on TC74, Tasmota, and some logic details.
Generated by the language model.
In this topic I will show how anyone can run Nemotron 3 Nano on their own computer and then test it with all sorts of electronics and programming tasks. Nemotron 3 Nano is a new thinking LLM based on a Mixture-of-Experts architecture with a Mamba-2 + Transformer hybrid. The model has 30 billion parameters, but only about 3-4 billion are active during inference, which significantly reduces the computational requirements.
Nemotron-3-nano-30b-a3b supports a very long context (up to 1M tokens) and several languages (German, French, Italian, etc.; no Polish), and is released under the NVIDIA Open Model License. The model allows commercial use.
Running the model locally Download Ollama:
https://ollama.com/ Ollama has had a regular Windows app for some time now. Just download and run it, and you have a windowed interface for talking to the AI. You still need to add models within the app; the easiest way is to download them by entering their name from the Ollama Library:
https://ollama.com/library The model is fresh, published over a week ago.
In this case I entered: nemotron-3-nano:30b. The model occupies approximately 23 GB:
Once downloaded, you can move on to testing. It is really very simple.
Decoding the resistor colour code To start with a task that I think every budding electronics engineer encounters - decoding resistor colour bands. This task potentially requires basic processing ability, although it could be argued that the AI model doesn't 'predict' here but has simply memorised every possible permutation of colours. That is difficult for me to assess.
I used our Elektroda calculator for comparison:
https://www.elektroda.pl/calculators/4-band-resistor Expected result:
Nemotron managed the task:
Inverse resistor test Can the AI determine the colours for given resistor parameters and then draw them in HTML+JS? Naturally, the whole thing runs as a separate chat, with no prior history.
It managed to specify the colours, but did it draw them?
Here, however, we have a small glitch. The code doesn't run - a JS syntax error.
Resistor for an LED Now for the second classic task, although here too I have chosen a slightly less typical voltage: calculating the series resistor for an LED:
Not only did the model handle the task, it also tried to advise on selecting actual resistor values from the standard series.
Console stars The next test concerned programming. I tried to think of tasks that are relatively simple but also less typical. I decided the AI would have to draw four rectangles of stars in the console and display them against a background of the letter 'B'.
Code: C / C++
Log in to see the code
It went very well - it works.
Time for a modification. Will the AI understand the context? I want each rectangle to be filled with its own consecutive number.
Works:
Game in HTML+JS - paddle Now for a simple game. Another exam-style task for a student. I gave a fairly precise description, which always helps when working with AI.
Generated code:
Code: HTML, XML
Log in to see the code
I am surprised, but it works - I have no complaints:
I asked for a modification - adding an opponent:
Well, sure, there are shortcomings. The opponent's paddle is placed crosswise, and the ball sometimes gets stuck between the paddle and the wall, but it is still impressive.
Arduino and reading from the TC74 The TC74 is a very simple temperature sensor with an I2C interface. Reading the temperature from it comes down to reading a single byte. Does Nemotron know about this?
The generated code, on the other hand...
Code: C / C++
Log in to see the code
In my opinion, this is not valid code. The byte read is simply the temperature; you do not need to multiply by 0.25. In addition, you only need to read one byte. I'm also not sure about the address, although that depends on the TC74 variant.
Classics - the letter 'r' in 'strawberry' This is a fairly common example: AI models have trouble counting letters because they operate on tokens.
But it does ponder it at length...
Good! How about a less typical word now? This will check whether it simply has 'strawberry' in its training data.
Successful.
Logical puzzles Some puzzles that LLMs have problems with. Tricky tasks.
Successful, although it took the system a long time to think:
The second trick question is about counting legs: four monkeys are jumping on the bed, three hens are on the floor, and the bed has four legs. How many legs are on the floor? 3*2 + 4 = 10.
The LLM replied 6. Debatable... it did not include the bed legs, though arguably it should have. Additionally, the justification it gave regarding the monkeys was wrong - they don't count because they are on the bed. However, I accept that the question is indeed vaguely worded and it can be argued that the bed legs do not count.
Information about Tasmota Tasmota is open-source firmware for IoT devices; does Nemotron know about it?
In my view there was a hallucination regarding the origin of the name. The rest is reasonably OK:
The point about JSON is debatable, unless it means the cmnd API.
In general, the description pretty much matches the facts.
Polish language Polish is not officially supported, but it is worth checking anyway.
Looks acceptable, only occasional language errors.
Short performance test on my old laptop I ran the demonstrations on an old Asus ROG-series laptop. Specifications:
- Intel (R) Core(TM) i7-6700HQ CPU @ 2.60GHz
- 48 GB RAM
- GeForce GTX 1060M
The video shows what the Nemotron's live response generation looks like:
Summary I may have set tasks that were too easy for this LLM, but to me the results are impressive. I didn't expect something like this to run on my old 2018 laptop.
This model really gets to grips with the basics of electronics, logic and programming, although it lacks at least some general knowledge - it hallucinated about the TC74 communication protocol, the origin of the Tasmota name, and some of its applications.
I can agree here with what Nvidia itself admits - this is a small and powerful model for agent systems, not Wikipedia.
Do you see any uses for this type of LLM model? Have you tested the Nemotron yet?
About Author
p.kaczmarek2 wrote 14451 posts with rating 12432, helped 650 times.
Been with us since 2014.
FAQ
TL;DR: Nemotron 3 Nano runs locally: 30B parameters with ~3–4B active, 1M‑token context, ~23 GB via Ollama; “the results are impressive.” [Elektroda, p.kaczmarek2, post #21791664]
Why it matters: This FAQ helps laptop users quickly install, test, and judge Nvidia’s Nemotron 3 Nano for electronics, coding, and hobby projects.
What is Nvidia Nemotron 3 Nano (30B a3b) in plain terms?
It’s a “thinking” large language model using a Mixture‑of‑Experts with a Mamba‑2 + Transformer hybrid. It activates only ~3–4B of its 30B parameters during inference, which cuts compute needs while keeping quality. It targets agent‑style tasks, not encyclopedic recall. [Elektroda, p.kaczmarek2, post #21791664]
Can I run Nemotron 3 Nano on Windows with Ollama?
Yes. Ollama now has a normal Windows app. You download Ollama, then add the model from the Ollama Library by entering nemotron-3-nano:30b. The thread shows it running interactively on a consumer laptop. [Elektroda, p.kaczmarek2, post #21791664]
How big is the download and what hardware did it run on in the test?
The Ollama pull consumed about 23 GB for nemotron-3-nano:30b. It produced live responses on an i7‑6700HQ laptop with 48 GB RAM and a GeForce GTX 1060M GPU, demonstrating practical local use. [Elektroda, p.kaczmarek2, post #21791664]
How do I install and chat with the model locally (3 steps)?
Install Ollama for Windows and open the app.
In Ollama, pull the model by name: nemotron-3-nano:30b.
Chat in the app’s windowed interface once the download completes.
Which languages does Nemotron 3 Nano support?
The model handled several European languages in testing and generated acceptable Polish, but Polish is not officially supported. Expect occasional language errors in Polish outputs despite usable results. [Elektroda, p.kaczmarek2, post #21791664]
How well does it decode resistor color bands?
It correctly decoded 4‑band resistors and matched values verified with an external calculator. It also proposed inverse mapping from value to colors, though an HTML/JS drawing attempt contained a syntax error. [Elektroda, p.kaczmarek2, post #21791664]
Can Nemotron generate working HTML/JS games?
Yes. It generated a playable paddle‑and‑ball game and then an opponent variant. The second version showed issues, like the opponent’s placement and occasional ball trapping near walls. “I have no complaints” on the first demo. [Elektroda, p.kaczmarek2, post #21791664]
Where did the model fail or hallucinate in the tests?
It hallucinated details for the TC74 protocol, mis‑explained Tasmota’s name and some uses, and produced a small JS syntax bug in a drawing task. These are classic edge cases for small local LLMs. [Elektroda, p.kaczmarek2, post #21791664]
Is it good for IoT topics like Tasmota?
It knew what Tasmota is and described features reasonably, but name origin and some applications were off. Treat IoT brand specifics as untrusted unless you verify. Use it to draft configs, then confirm details. [Elektroda, p.kaczmarek2, post #21791664]
How did it perform on Arduino and the TC74 temperature sensor?
It produced an Arduino sketch that read two bytes and scaled by 0.25 °C, which the tester flagged as incorrect. TC74 reading needs only one byte, and the address depends on variant. Verify datasheet logic. [Elektroda, p.kaczmarek2, post #21791664]
Does Nemotron handle logical puzzles and counting tasks?
It solved some tricky puzzles but stumbled on a leg‑count question by omitting bed legs and giving weak justification. Expect occasional reasoning slips on ambiguous wording. [Elektroda, p.kaczmarek2, post #21791664]
What does the 1M‑token context buy me on a laptop?
You can feed very long documents, logs, or multi‑file code to a single session. The Mixture‑of‑Experts design activates ~3–4B parameters, helping keep latency manageable on older hardware. [Elektroda, p.kaczmarek2, post #21791664]
Is the model okay for commercial projects?
Yes. It’s released under the NVIDIA Open Model License, and the thread notes commercial use is allowed. Still review license terms for your distribution scenario. [Elektroda, p.kaczmarek2, post #21791664]
What’s a good way to test electronics skills quickly?
Try three prompts: decode a resistor from its colours, compute an LED series resistor, and map a given resistance back to its colour bands. Compare results with independent calculators to confirm. [Elektroda, p.kaczmarek2, post #21791664]
Will Nemotron replace web search for general knowledge?
No. The tester agreed with Nvidia’s positioning: it’s a small, powerful model for agent systems, “not Wikipedia.” Use it for workflows and coding, then verify facts. [Elektroda, p.kaczmarek2, post #21791664]
What are realistic use cases on an older 2018 laptop?
Local chat for coding helpers, electronics calculations, simple game or UI prototypes, language drafting in supported languages, and logic practice. The live video shows responsive output on i7‑6700HQ + GTX 1060M. [Elektroda, p.kaczmarek2, post #21791664]
Comments
The fun continues, although soon I'll rather be testing with my own script that queries each downloadable model from Ollama in turn. These 'thoughts' of the model are, in a way, quite... frightening.