The skills needed to scale AI: roles, methods and organisational culture
Skills, governance and collaboration are key to turning AI into a competitive advantage
by Giovanni Pirola*
When adopting solutions based on Generative AI, the real differentiator is not so much the model (or LLM) as an organisation's ability to run it in production securely and at scale. The roles and methods are clear: AI platform engineering to define the right infrastructure for serving workloads flexibly, open standards to preserve freedom of choice, and leadership capable of governing the transformation. The maturity of an adoption path is measured by ROI, solution deployment time and security policies, not by the sheer number of experiments performed. A structured approach that moves from proof of concept to production should foster collaboration and help break down silos, integrating infrastructure and operations activities with development activities. It also bridges the skills gap between technical and more purely functional expertise, making productive the entire chain of competences needed to bring Generative AI into business processes. In this short article we will refer to Generative AI simply as AI, although many of the considerations also apply to Predictive AI.
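To make the "open standards" point concrete, here is a minimal sketch (not from the article) of an inference client written against the de facto standard OpenAI-compatible REST API, which many open model servers (for example vLLM) expose. The endpoint URL, environment variable names and model name are hypothetical placeholders.

```python
import os
import requests

# Hypothetical configuration: only the base URL differs between environments.
# Any server exposing the OpenAI-compatible API (e.g. vLLM) should work.
BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:8000")  # placeholder
MODEL = os.environ.get("LLM_MODEL", "my-org/my-model")              # placeholder

def chat(prompt: str) -> str:
    """Send a single-turn request to an OpenAI-compatible chat endpoint."""
    resp = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarise our returns policy in one sentence."))
```

Written this way, moving from a public cloud trial endpoint to an on-premise production server becomes a configuration change (the base URL) rather than a code change, which is exactly the freedom of choice open standards are meant to preserve.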
From experimentation to production: ROI, infrastructure and scalability
According to a recent Red Hat survey, Italian business leaders rightly expect a clear return on investment from AI adoption and identify 'Lack of clear business value or ROI' (31 per cent) and 'Insufficient infrastructure' (31 per cent) as the main barriers encountered.
Return on investment is certainly a crucial issue: organisations need to define specific KPIs during the trial phase in order to assess whether the use case under consideration actually delivers that return.
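As an illustration of what such a KPI check might look like, the following sketch computes a simple first-year ROI and payback period from trial-phase measurements. Every figure and variable name is an invented placeholder; a real evaluation would track further dimensions such as output quality, risk and user adoption.

```python
# Hypothetical trial-phase KPIs for one candidate use case; all figures invented.
hours_saved_per_month = 320          # measured during the pilot
loaded_hourly_cost = 45.0            # EUR, fully loaded cost of the staff involved
monthly_platform_cost = 6_000.0      # EUR: GPUs, licences, support
one_off_setup_cost = 30_000.0        # EUR: integration and training

monthly_benefit = hours_saved_per_month * loaded_hourly_cost
monthly_net = monthly_benefit - monthly_platform_cost

# Simple first-year ROI: (annual benefit - annual cost) / annual cost
annual_cost = 12 * monthly_platform_cost + one_off_setup_cost
annual_benefit = 12 * monthly_benefit
roi = (annual_benefit - annual_cost) / annual_cost

payback_months = one_off_setup_cost / monthly_net if monthly_net > 0 else float("inf")

print(f"First-year ROI: {roi:.0%}, payback in {payback_months:.1f} months")
```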
As for infrastructure barriers, a key issue is the availability of specialised hardware and computing resources (e.g. GPUs) for AI projects. To demonstrate the ROI of selected experimental initiatives without incurring the cost and lead time of procuring dedicated hardware, organisations typically get around this obstacle during the experimental phase by relying on public cloud infrastructure, which allows fast provisioning and a rapid feedback loop at moderate initial cost. When moving to production, however, considerations of data privacy, security, scalability and cost predictability typically call for an on-premise or private cloud environment.
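The cost predictability argument can be made tangible with a back-of-the-envelope break-even calculation between renting a cloud GPU and amortising an on-premise one. All prices below are hypothetical placeholders, not quotes.

```python
# Break-even between an on-demand cloud GPU and an on-premise purchase;
# every price is a hypothetical placeholder.
cloud_gpu_hourly = 3.0        # EUR/hour for an on-demand cloud GPU instance
onprem_gpu_capex = 25_000.0   # EUR purchase price of a comparable GPU server
onprem_monthly_opex = 400.0   # EUR/month for power, cooling, operations
amortisation_months = 36      # depreciation horizon

onprem_monthly_total = onprem_gpu_capex / amortisation_months + onprem_monthly_opex

# Hours of utilisation per month at which cloud and on-prem costs match.
break_even_hours = onprem_monthly_total / cloud_gpu_hourly

print(f"On-prem costs ~{onprem_monthly_total:.0f} EUR/month; "
      f"cloud is cheaper below ~{break_even_hours:.0f} GPU-hours/month")
```

The qualitative conclusion matches the paragraph above: at low, bursty utilisation (the experimental phase) the public cloud wins, while at sustained production utilisation on-premise capacity becomes both cheaper and more predictable.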
According to Red Hat, an open hybrid cloud approach, which integrates existing on-premise infrastructure with public cloud resources, addresses all of the considerations listed above and also aligns with the long-term priorities of Italian companies, such as 'Realignment of cloud strategy for AI' (56 per cent) and 'Cost optimisation' (55 per cent).
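One concrete way to read "open hybrid cloud" is at the Kubernetes layer: a containerised model server described once can be scheduled on any conformant cluster, whether cloud-managed or on-premise. The sketch below (not from the article) uses the official kubernetes Python client and assumes GPUs are exposed via the standard nvidia.com/gpu resource of the NVIDIA device plugin; the image name and namespace are hypothetical.

```python
# A minimal sketch of a GPU-backed model-serving Deployment. Because it is
# plain Kubernetes, the same object can be applied to a managed cloud cluster
# or an on-premise one without modification.
from kubernetes import client, config

config.load_kube_config()  # targets whichever cluster kubectl points at

container = client.V1Container(
    name="model-server",
    image="registry.example.com/model-server:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```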


