Running AI workloads with sensitive data in the public cloud has its disadvantages, notably around privacy, latency, and cost, but a new generation of laptops can now perform many of the same tasks locally, in a smaller form factor and at a much lower cost.

Since personal computers emerged in the late 1970s, CPUs have been supplemented with hardware add-ons, from math coprocessors onward, to handle the hard work behind demanding computational workloads. Intel is now advancing this tradition with a dedicated AI accelerator called a neural processing unit (NPU), which is ideal for power-efficient, sustained AI workloads.

The Intel® Core™ Ultra processors that power Intel’s AI PCs allow AI workloads to be offloaded from the cloud to the PC itself, reducing latency and cost while keeping sensitive data private by ensuring that it never leaves your PC. And because the NPU does the work, these tasks don’t tie up the CPU, slow down the rest of the system, or drain the battery while the bits are being crunched.

That’s a good thing, because AI workloads often run very differently from day-to-day workloads like web browsers, word processors, and spreadsheets. NPUs are optimized for massive parallelism, which they bring to bear on the enormous number of mathematical operations that AI inference requires. Those calculations don’t have to be super precise, but they do need to be fast, and to run in the background without sucking your battery dry.
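To see what “fast but not super precise” means in practice, here is a minimal, illustrative sketch (not Intel’s implementation) of the kind of trade-off AI accelerators make: quantizing 32-bit floating-point values down to 8-bit integers, doing the arithmetic in integers, then scaling back. The result is close to the exact answer, but not identical.

```python
import random

def quantize(xs, scale):
    # Map each float to the int8 range [-128, 127] using a simple symmetric scale.
    return [max(-128, min(127, round(x / scale))) for x in xs]

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1024)]
acts = [random.gauss(0, 1) for _ in range(1024)]

# Exact dot product in full floating-point precision.
exact = sum(w * a for w, a in zip(weights, acts))

# Low-precision version: quantize both operands to int8, accumulate in
# integers (cheap and highly parallelizable on accelerator hardware),
# then rescale the result back to a float.
s_w = max(abs(w) for w in weights) / 127
s_a = max(abs(a) for a in acts) / 127
qdot = sum(qw * qa for qw, qa in zip(quantize(weights, s_w), quantize(acts, s_a)))
approx = qdot * s_w * s_a
```

The two results differ only slightly, which is typically “good enough” for neural network inference, and the integer math is far cheaper in silicon and power than full floating-point.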

To improve efficiency and extend battery life, Intel has already been distributing work across different core types, namely P-cores (performance cores) and E-cores (efficiency cores), depending on the processing need. And for some uses, like image processing, integrated graphics processing units (GPUs) take over, since GPUs are highly optimized for those workloads in ways a general-purpose CPU is not. It’s far more efficient to have the GPU crank through images without disturbing the CPU, which can then focus on your other critical tasks.
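Conceptually, this hybrid approach amounts to routing each workload to the execution unit best suited for it. The sketch below is a simplified software analogy of that idea; the routing rules and device names are illustrative assumptions, not how the hardware scheduler actually works.

```python
# Simplified dispatch table: which execution unit handles which workload.
ROUTES = {
    "ui_interaction": "P-core",     # latency-sensitive foreground work
    "background_sync": "E-core",    # low-priority sustained work
    "image_filter": "GPU",          # massively parallel pixel math
    "ai_inference": "NPU",          # sustained, power-efficient AI math
}

def route(workload: str) -> str:
    # Anything unrecognized falls back to the general-purpose P-cores.
    return ROUTES.get(workload, "P-core")
```

In the real system this scheduling happens in hardware and in the operating system’s thread director, but the principle is the same: put each job where it runs cheapest.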

Check out this blog to learn more about ESET’s ongoing collaboration with Intel and how leveraging AI PC architecture can help ESET improve cybersecurity and user experience.

If you’re trying to improve native device security with AI, your life will probably get a lot easier. Offloading tasks such as scanning for phishing email attempts to the NPU, then reporting the results back to the system’s security software, can speed up related detections significantly.
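The offload pattern looks roughly like this hypothetical sketch. The classifier here is a trivial keyword heuristic standing in for a real NPU-accelerated model, and names like `npu_classify` and `report_to_security_software` are illustrative, not any real product’s API.

```python
# Phrases our toy model treats as phishing signals (stand-in for real inference).
SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent action required",
    "click here to unlock",
)

def npu_classify(email_body: str) -> float:
    """Stand-in for model inference that would actually run on the NPU."""
    body = email_body.lower()
    hits = sum(phrase in body for phrase in SUSPICIOUS_PHRASES)
    return min(1.0, hits / 2)  # crude phishing score in [0, 1]

def report_to_security_software(score: float, threshold: float = 0.5) -> str:
    """Hand the accelerator's verdict back to the security layer for action."""
    return "quarantine" if score >= threshold else "deliver"

verdict = report_to_security_software(npu_classify(
    "URGENT action required: click here to unlock and verify your account"))
```

The point is the division of labor: the accelerator does the heavy scoring in the background, and the security software only has to act on a lightweight verdict.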

Turning to hardware for improved malware detection is all the rage. Components originally intended for monitoring basic system health are now being extended to help detect malware at a very low level and to hand the results of AI processing to security software so it can kill malicious code.

Years ago, we watched as security and privacy tasks got their own dedicated hardware, including secure external drives, expansion cards for offloading web traffic encryption, cryptographic processors like TPM chips for securing our PCs’ secrets, and so on. These days, hardware and software bundles include cutting-edge capabilities developed through Intel’s collaboration with ESET, notably Intel® Threat Detection Technology and hybrid processor architecture.

Expect that trend to continue. As the integration of dedicated GPUs and NPUs progresses, your laptop’s processor will be able to handle much more sophisticated AI workloads seamlessly in the background, even without internet access, and possibly make your AI-powered laptop seem even more creepy-smart.

While the use of dedicated AI hardware in PCs is still in its infancy, the possible uses in cybersecurity are immense. Embedded NPUs are just the beginning, and perhaps we will see other hardware-level improvements in processors that security software can leverage in the future, not just to improve performance but also to keep computers more secure while doing the heavy lifting of AI workloads.