Monday 8 February 2016

Soft Machines claims its cutting-edge VISC CPU cores can outperform Intel, ARM in performance per watt


More than a year ago, we covered Soft Machines' VISC (Variable Instruction Set Computing) architecture and the company's long-term goal to improve efficiency. Soft Machines argues that by creating a middleware software layer that translates single-threaded code into parallel workloads executed across multiple virtual cores, it can improve overall execution efficiency and reduce power consumption. Or at least, that's been the claim.
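VISC is supposed to do that work-splitting in hardware, underneath the operating system, so there's no direct software equivalent to point at. Purely as a rough analogy (my own sketch, not Soft Machines' mechanism), the C program below manually partitions a nominally serial summation across two POSIX threads, which is conceptually the kind of division of labor the translation layer is claimed to perform automatically and at a much finer granularity:

/* Rough software analogy only: VISC claims to split single-threaded work
 * automatically, below the OS, across "virtual cores." This sketch just
 * partitions a serial summation across two POSIX threads by hand. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static double data[N];

struct chunk { int start, end; double sum; };

static void *partial_sum(void *arg) {
    struct chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->start; i < c->end; i++)
        c->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = 1.0;               /* dummy workload */

    struct chunk halves[2] = { {0, N / 2, 0.0}, {N / 2, N, 0.0} };
    pthread_t threads[2];

    for (int t = 0; t < 2; t++)
        pthread_create(&threads[t], NULL, partial_sum, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(threads[t], NULL);

    printf("sum = %f\n", halves[0].sum + halves[1].sum);
    return 0;
}

The crucial difference, of course, is that Soft Machines claims to do this on unmodified single-threaded code, without any involvement from the programmer or the OS.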
Soft Machines has now revealed more data on how it expects its first VISC core, Shasta, to perform, as well as information on the upcoming Shasta+ and Tahoe CPUs.
The first Shasta core will be available this year, with 1-2 virtual cores on a dual-core configuration, or an SMP block of 2-4 virtual cores on a quad-core configuration. The CPU uses a 64-bit ISA and should be clocked at 2GHz. By 2017, Shasta+ will move to 10nm with support for more virtual core instances, followed by a new architecture, Tahoe, in 2018.
The performance-per-watt chart Soft Machines shared captures much of what the company believes makes its hardware appealing. The company is basically arguing that by virtualizing CPU resources and breaking even single-threaded workloads into pieces that can be spread across different cores (with hypothetically different resources and capabilities), it can realize greater efficiencies than CPU architectures that rely on dynamic voltage and frequency scaling (DVFS).
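As for why spreading work across more, slower cores could beat cranking a single core up and down, the standard back-of-the-envelope argument is that dynamic CPU power scales roughly with capacitance times voltage squared times frequency, and voltage tends to scale with frequency within the DVFS range. The sketch below runs that arithmetic with illustrative figures of my own choosing, not Soft Machines' data:

/* Back-of-the-envelope illustration (made-up numbers, not Soft Machines' data):
 * dynamic CPU power scales roughly as P = C * V^2 * f, and in the DVFS range
 * voltage scales roughly with frequency, so two slower cores can finish the
 * same work for less energy than one fast core. */
#include <stdio.h>

static double dynamic_power(double cap, double volts, double freq_ghz) {
    return cap * volts * volts * freq_ghz;   /* arbitrary units */
}

int main(void) {
    double cap = 1.0;                        /* effective switched capacitance */

    /* One core at 2 GHz, 1.0 V: finishes the work in time T. */
    double p_single = dynamic_power(cap, 1.0, 2.0);

    /* Two cores at 1 GHz, 0.7 V each: same nominal throughput, same time T. */
    double p_dual = 2.0 * dynamic_power(cap, 0.7, 1.0);

    printf("one fast core    : %.2f (relative power)\n", p_single);
    printf("two slower cores : %.2f (relative power)\n", p_dual);
    return 0;
}

With those assumed numbers, the two slower cores deliver the same nominal throughput at roughly half the dynamic power. That's the efficiency window Soft Machines is aiming at, assuming its translation layer can actually keep those virtual cores busy.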
The big question to answer, I think, is how much of an overhead penalty Soft Machines pays for its virtualization, and what kinds of workloads it can effectively execute on its cores. SPEC is a decent cross-platform benchmark, but it's also susceptible to hand-tuning and careful optimization. Soft Machines' documentation states that the same GCC 4.9 settings were used for all processors, but SPEC runs aren't the same as commercial software deployments.
Now, the Shasta results shown here are simulated, but again, Soft Machines claims to be using the same model it adopted for simulating the performance of its proof-of-concept 28nm core. That simulation method proved accurate for the 28nm chip, coming within 5% on performance and 10% on power, so in theory the Shasta, Shasta+, and Tahoe projections should track real silicon similarly well.


We see plenty of CPU announcements come and go in the journalism business, but Soft Machines has been flying largely under the radar since 2014. They’ve made a few additional announcements, but most of the company’s efforts have apparently been on improving its products as opposed to its media profile. I’m genuinely curious to see if their virtualization approach can actually yield benefits in real-world scenarios, particularly given the difficulty that companies like Intel have had with increasing overall performance. Breaking workloads up dynamically and executing them across virtual “cores” could be more power-efficient than scaling single cores up and down by clock speed, but demonstrating that efficiency in real-world tests will still take some additional work.
Since Soft Machines doesn’t build its own CPUs or SoCs, we’ll have to wait for partner silicon to come to market before we can draw firmer conclusions about whether this approach can improve performance.
