Closed systems stifle innovation, and Linux users know it. Licenses, royalties, and fees keep the well-funded in control. RISC-V throws that out the window because it’s free to adopt, adapt, and build upon. It feels like the beginning of something big.
A Free (as in Freedom) ISA
RISC-V (pronounced “risk five”) is an instruction set architecture (ISA) that builds on a series of ISAs developed at Berkeley in the 1980s, known as Berkeley RISC. The Berkeley team created four generations of RISC architectures during that period, and in 2010 it began developing RISC-V.
But what is it? Is it a CPU or something else? It’s something else. An ISA is the blueprint for a CPU: it’s not a physical chip but a specification that defines the instructions a processor must implement. The RISC-V ISA is an open standard, and its specification is available under a CC BY 4.0 license. Under that license, anyone can use and modify the specification for any purpose, including commercial use, and they only need to give a nod to the copyright holder, RISC-V International.

What Is RISC, and How Does It Compare to x86?
RISC differs fundamentally from another common ISA approach called CISC:
The two differ in how they execute commands (aka instructions). RISC uses small, simple instructions that each finish in roughly one clock cycle. CISC uses bigger instructions that take several clock cycles to finish. For example, to multiply two numbers, RISC performs each step separately: loading the numbers into registers, multiplying them, then storing the result back in main memory, each step in its own clock cycle. CISC, on the other hand, does all of those steps with one instruction that runs over multiple clock cycles.
The result is that RISC moves complexity up into the program: CISC does more on-chip, while RISC does more in software. For RISC, that means larger programs and more memory usage. Crucially, though, it also means a smaller ISA and consequently fewer transistors on the CPU. Fewer transistors leave more room for other resources, like registers (fast storage) and extra cores for parallel processing. As a result, RISC-based CPUs consume less power and have the potential to outperform CISC designs, which is why RISC architectures are popular in mobile devices (e.g., ARM).

In short: RISC uses fewer transistors but more main system memory. It’s more power efficient than CISC and potentially more performant.
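To make the trade-off concrete, here’s a small C sketch. The RISC-V instruction sequence in the comments is illustrative rather than exact compiler output, but it shows how a simple multiply breaks into separate load, multiply, and store steps, where a CISC ISA could fold the same work into one multi-cycle instruction.

```c
#include <stdint.h>

/* The same C statement seen through RISC and CISC eyes. The instruction
 * sequences in the comments are a conceptual sketch, not real compiler output. */
void multiply(const int32_t *a, const int32_t *b, int32_t *result)
{
    /* A RISC ISA (e.g., RISC-V RV32IM) splits this into simple steps,
     * each designed to complete in roughly one clock cycle:
     *
     *   lw  t0, 0(a0)    # load the first number from memory into a register
     *   lw  t1, 0(a1)    # load the second number
     *   mul t2, t0, t1   # multiply the two registers
     *   sw  t2, 0(a2)    # store the product back to main memory
     *
     * A CISC ISA could express the same work as a single, more complex
     * instruction that internally runs over several clock cycles. */
    *result = *a * *b;
}
```

Either way the work gets done; the difference is whether the complexity lives inside the instruction set or in the longer stream of simple instructions the compiled program has to carry.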
It’s for Developers, Geeks, and Hardware Vendors
Small RISC-V hardware vendors are driving something of a hackerspace revolution at the moment, much like the Raspberry Pi did. RISC-V may not offer much to the everyday person yet, but to appeal to developers, small RISC-V vendors are building development kits packed with features, like NPUs (neural processing units), GPUs, and boards no bigger than a credit card. Development boards are really for anyone who’s interested, not just developers. We have coverage of the Orange Pi RV2, which is one such board.
Debian, Fedora, Arch Linux, and others all provide images for RISC-V in one form or another. Debian, for example, has compiled roughly 98% of its packages for RISC-V. However, it’s not all plain sailing, because images are typically built with specific boards in mind, and some require technical effort to get working. I’m sure this will change over time; it’s still early days. So, if you intend to venture into the RISC-V world, research the boards that you buy and the distro that you use. If you want, you may also read about ’s support for RISC-V processors.
VisionFive2 RISC-V SBC Starter Kit
This single-board computer kit features the StarFive JH7110, an open source RISC-V quad-core processor based on the U74 core. It also has a GPU that can hit 600 MHz for graphics processing, plus multiple ports and pins for connectivity and experimenting.
The biggest impact will be on hardware vendors. With a licensing model that asks for nothing (except for attribution on documentation), hardware vendors are free to build upon, extend, and sell innovative hardware. Large organizations (like Google or Microsoft) can extend the ISA and produce their own specialist hardware to meet their needs. It’s this freedom that will see RISC-V increase in adoption, just like it did for Linux in the server space.
The hardware strategy of organizations like Google doesn’t affect geeks like you and me. However, I think that in the coming years, smaller hardware vendors will align their products to fill niche demand. As time rolls on and vendors become more numerous, markets will splinter into niches, fabrication will become more accessible, and we will see more and more niche products with specialist hardware. This is great for Linux geeks, because we will have a first-class seat and the know-how to develop our own cool gadgets.
Projects That Won’t Break the Bank
My mind is racing with ideas; below are some projects that I would love to take on.
I have not tested these, and they are just ideas, not recommendations. You must do your own thorough research before embarking on any project.
A Custom Travel Router
The Muse Pi offers an 8-core CPU, miniPCIe, and 64-128 GB of eMMC flash. It’s possible to connect a Wi-Fi card to the miniPCIe slot and run it in AP-client mode, where the board joins an upstream network as a client and shares it through its own access point (like a bridge, except traffic is NATed). With 8-16 GB of RAM, there’s plenty of room to run other software, like a network intrusion detection system (NIDS).
A Custom AI Compute Cluster
Okay, this one may break the bank.
The Milk-V Cluster 08 will be a cluster mainboard with 8 expansion slots, designed to accommodate 8 SoMs. A SoM is a complete system on a single expansion card, so that’s one full system in every slot, eight per board. The Cluster 08 was designed around the yet unreleased Milk-V Megrez NX SoM. Each module will pack up to 32 GB of RAM and an NPU rated at 19.95 TOPS (tera operations per second) at INT8 precision. Put all 8 modules together and you get roughly 159 TOPS (8 × 19.95) and 256 GB of RAM (8 × 32 GB). RAM is very important for LLM inference, and 159 trillion INT8 operations per second is not too far off a high-end graphics card.
It’s possible that a complete system could run an INT8-quantized LLM approaching 256 GB in size. A quantized model simply means that the model weights were converted from FP32 (32-bit floating point) to INT8 (8-bit integer), which shrinks the model to roughly a quarter of its original size and lets it run more efficiently on hardware other than a GPU. You will find many such models on Hugging Face.
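If you’re curious what quantization actually does, below is a minimal C sketch of the idea: symmetric, per-tensor INT8 quantization, where every 4-byte FP32 weight becomes a 1-byte integer plus a shared scale factor. Real LLM quantization schemes (per-channel scales, zero points, GPTQ, AWQ, and so on) are far more sophisticated; this only illustrates why the quantized model is roughly a quarter of the size.

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Toy symmetric INT8 quantization: map FP32 weights onto [-127, 127]
 * using one shared scale. Every 4-byte float becomes a 1-byte integer. */
static void quantize_int8(const float *w, int8_t *q, float *scale, size_t n)
{
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; i++)
        if (fabsf(w[i]) > max_abs)
            max_abs = fabsf(w[i]);

    *scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (size_t i = 0; i < n; i++)
        q[i] = (int8_t)lroundf(w[i] / *scale);
}

int main(void)
{
    float weights[] = { 0.42f, -1.30f, 0.07f, 0.95f };  /* made-up example weights */
    int8_t quant[4];
    float scale;

    quantize_int8(weights, quant, &scale, 4);
    for (int i = 0; i < 4; i++)
        printf("%+.2f -> %4d (dequantized: %+.2f)\n",
               weights[i], quant[i], quant[i] * scale);
    return 0;
}
```

The dequantized values come out close to, but not exactly equal to, the originals; that small loss of precision is the price of the roughly 4x memory saving.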
A RISC-V Laptop
A full-fledged RISC-V laptop may be years away, but thanks to Framework (the company), laptops are now more modular than ever. With the DC-ROMA AI PC RISC-V Mainboard II from DeepComputing, we can get access to an 8-core RISC-V CPU and a 40 TOPS (INT8) NPU.
It’s possible that the performance may not be what you expect, so do your own research.
A Games Console
If you haven’t yet got a Steam Deck and you’re looking for a project, read our article on playing Steam games on RISC-V processors. Alternatively, because RISC-V-based boards are just mini PCs, you could also create your own retro games console.
RISC-V is still far off from mainstream desktop and mobile adoption. However, we already see its presence in some embedded devices, like cameras. While it’s great for passion projects, the platform is still very much in development, and it will be years before it’s ready for the mainstream.
The Milk-V compute cluster may not be ready yet, and if you don’t want to wait for it, a popular choice for running local LLMs is to use two RTX 3090 24 GB cards. These beasts achieve 284 TOPS for INT8 and 568 TOPS for INT4 each, so running two doubles those numbers. If you intend to use very large models, it’s important to get as much VRAM as you can. Such a setup cannot run everything, but it can go pretty far.