Guides

When a gaming laptop becomes an artificial intelligence laboratory

We chose the HP Omen Gaming 17 to experiment with some open-weight AI models. Here's how it went.

by Luca Tremolada

3 min read

Translated by AI


Gaming laptops can also be useful for 'playing' with artificial intelligence. We are talking about machines designed to sustain heavy graphics loads, with dedicated GPUs (GeForce RTX, Radeon RX), high-refresh-rate screens, power compressed into a few centimetres of thickness, and cooling systems that look like small turbines. For our test we chose the HP Omen Gaming 17, a machine that pairs a capable graphics card, the NVIDIA GeForce RTX 4060 with 8 GB of video memory, with 32 GB of DDR5 RAM at 5600 MHz and a 1 TB PCIe Gen4 SSD. With this equipment the system avoids forced compression, keeps latencies under control and offers a solid base even for those who record, edit or model content alongside gaming. It is a gamer's laptop very close to the high end, but it is not (nor does it claim to be) the absolute top of the line. Prices for this series start at EUR 1,299; the model under test can be found online from EUR 1,350 upwards.

We put it to the test with open-weight LLMs (large language models) that you can download and try out on a laptop. These are AI models whose weights (the model's parameters) have been made public and can therefore be run, inspected, modified and retrained by anyone. You can use them with no cloud, no consumer APIs and no privacy worries. So we downloaded OpenAI's gpt-oss in the 120B variant (the largest), as well as Google's Gemma, the lighter 7-billion-parameter model, and the very popular Chinese Qwen model. We also used Nvidia's ChatRTX, a tool for running generative models locally, directly on the GPU.


The machine performed well. You can feel the workload, much as when running particularly demanding video games. The technical specification to watch is a GPU that is up to the task. On a typical gaming laptop, a mobile RTX 4060 or 4070 already lets you run 7-billion-parameter models with smooth responses, without resorting to tricks. With a bit more video memory, 12 or 16 gigabytes, models in the 13-to-20-billion-parameter range also come into the picture. That class of model paves the way for more capable assistants: chatbots that read local documents, systems that summarise long texts or respond with a minimum of reasoning.
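A rough rule of thumb helps explain why 8 GB of video memory handles a 7B model but the 13-20B range wants 12-16 GB. The sketch below is an estimate, not a measurement from the test machine: it assumes 4-bit quantised weights plus roughly 20% overhead for the context cache and runtime buffers (both figures are assumptions).

```python
def vram_estimate_gb(n_params_billions: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough VRAM footprint: weights at the given precision, plus ~20%
    overhead for the context cache and runtime buffers (an assumption,
    not a measured figure)."""
    bytes_per_weight = bits_per_weight / 8
    return n_params_billions * bytes_per_weight * overhead

# A 7B model quantised to 4 bits fits comfortably in the RTX 4060's 8 GB...
print(f"7B  @ 4-bit: ~{vram_estimate_gb(7):.1f} GB")
# ...while 13B and 20B models push toward 12-16 GB cards.
print(f"13B @ 4-bit: ~{vram_estimate_gb(13):.1f} GB")
print(f"20B @ 4-bit: ~{vram_estimate_gb(20):.1f} GB")
```

At higher precision (16-bit weights) the same 7B model would need roughly four times as much memory, which is why quantisation matters so much on laptop GPUs.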

Alongside the GPU, RAM does a silent but decisive job. The 32 gigabytes found in high-end laptops today are not a fad: they give the models room to breathe, they support quantisation (the technique that compresses a model so it occupies less memory without losing too much quality), and they hold everything else together: the operating system, open applications, tab-filled browsers.
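To see what quantisation actually does, here is a minimal toy sketch: symmetric 8-bit quantisation of a handful of weights (real runtimes use more sophisticated schemes, often 4-bit, but the principle is the same: fewer bytes per weight, a small rounding error in return).

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantisation: map floats to integers in [-127, 127]
    using a single scale factor derived from the largest weight."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the quantised integers."""
    return [x * scale for x in q]

weights = [0.12, -0.53, 0.98, -0.07, 0.41]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now needs 1 byte instead of 4 (float32): a 4x memory saving,
# at the cost of a small rounding error per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max round-trip error: {max_err:.4f}")
```

Scaled up to billions of parameters, that per-weight saving is the difference between a model that fits in a laptop's memory and one that does not.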

Then there is storage: a fast PCIe SSD is an implicit requirement. These models are not small. Even compressed they weigh gigabytes, and a slow disk would be like routing a Formula 1 car through rush-hour traffic on the ring road. One last point: in the test we used Ollama, one of the most popular tools today for running AI models locally. ChatRTX, Nvidia's offering, adds an optimisation layer to squeeze even more out of RTX GPUs. In short: good for AI, as long as you use it for experimentation or to automate some day-to-day tasks. If you run a business and want to keep data and infrastructure in-house, you need much, much more powerful machines.
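Once a model is pulled, Ollama exposes it through a local HTTP server (by default on port 11434), so day-to-day tasks can be scripted without anything leaving the machine. A minimal sketch, assuming a running `ollama serve` and a pulled model; the model tag `gemma:7b` is illustrative, adjust it to whatever `ollama list` shows locally:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint; stream=False asks for
    one complete JSON response instead of a token stream."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return its answer.
    Assumes `ollama serve` is running and the model has been pulled."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (only works with a local Ollama server and a pulled model):
#   print(generate("gemma:7b", "Summarise this document in two sentences."))
```

Everything in this loop stays on the laptop, which is exactly the privacy argument for open-weight models made above.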

Copyright reserved ©
  • Luca Tremolada

    Luca Tremolada, Journalist

    Location: Milan, via Monte Rosa 91

    Languages spoken: English, French

    Topics: Technology, science, finance, startups, data

    Awards: Gabriele Lanfredini Prize for journalism; State Street journalism award, "Innovation" category; DStars 2019, journalism category



