Digital Economy

OpenAI launches two open-weight models and challenges DeepSeek and Llama on their own ground

They are called gpt-oss-120b and gpt-oss-20b. They can be downloaded for free, fine-tuned and run directly on the hardware of an ordinary laptop

by Luca Tremolada

2 min read


When DeepSeek, the open Chinese AI model that jolted Wall Street, appeared on the scene, Sam Altman let slip in a tweet that keeping ChatGPT a closed system had perhaps been a mistake. A few months later comes the answer: two models called gpt-oss-120b and gpt-oss-20b. These are open-weight reasoning models that OpenAI describes as its most advanced to date. They can be downloaded for free, fine-tuned and run directly on the hardware of an ordinary laptop. This is not GPT-5, which will probably be released soon, but artificial intelligence whose weights (the parameters learned during training) are publicly available, unlike those of ChatGPT, Gemini and Microsoft Copilot. The other distinguishing feature is that these are 'reasoning models': to simplify, they 'think' before responding. The promise is high real-world performance at low cost. The ambition is to challenge Meta with Llama, Google with Gemma, and the Chinese DeepSeek and Qwen on their own turf.

How are they made?


As stated on OpenAI's site, the models are available under the permissive Apache 2.0 licence. They aim to outperform similarly sized open models on reasoning tasks, demonstrate strong tool-use capabilities, and are optimised to run efficiently on consumer hardware. They were trained with a mix of reinforcement learning and techniques informed by OpenAI's more advanced internal models, including o3 and other frontier systems.


The gpt-oss-120b model achieves near parity with OpenAI o4-mini on core reasoning benchmarks while running on a single 80 GB GPU. The gpt-oss-20b model delivers results similar to OpenAI o3-mini on common benchmarks and can run on edge devices with just 16 GB of memory. The smaller model in particular seems designed for limited hardware and can, in theory, also be used entirely offline. This lowers the barrier for developers, start-ups and organisations operating in emerging markets or high-security environments, allowing them to run and customise AI on their own infrastructure.

OpenAI is publishing new developer guides and tools to help teams fine-tune responsibly, implement guardrails, and integrate the models with Hugging Face, vLLM, Ollama, llama.cpp and the major GPU/accelerator platforms.
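As an illustration of the local workflow those integrations enable, here is a minimal sketch of running the smaller model through Ollama. The model tag `gpt-oss:20b` is the one advertised for Ollama's registry at launch; verify the current tag before relying on it, and note the download is many gigabytes.

```shell
# Download the open weights from Ollama's registry (large download),
# then start a one-off local inference, no cloud API involved.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b "Explain in one sentence what an open-weight model is."
```

Because the weights live on the local machine, the same setup works offline once the initial download has completed.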

The open-weight models will be shared on Hugging Face and other platforms, with the technical report, safety document, system card and developer guides to be made available later today.

"Since its inception in 2015, OpenAI's mission has been to ensure that AGI benefits all of humanity," said Sam Altman. "With this in mind, we are excited that the world can build on an open AI stack developed in the United States, founded on democratic values, available for free to all and for broad benefit."

Copyright reserved ©
  • Luca Tremolada, Journalist

    Location: Milan, via Monte Rosa 91

    Languages spoken: English, French

    Topics: Technology, science, finance, startups, data

    Awards: Premio Gabriele Lanfredini sull'informazione; Premio giornalistico State Street, "Innovation" category; DStars 2019, journalism category
