Intervention

The skills needed to scale AI: roles, methods and organisational culture

Skills, governance and collaboration are key to turning AI into a competitive advantage

by Giovanni Pirola*

5 min read


In adopting solutions based on Generative AI, the real differentiator is not so much the model (or LLM) as an organisation's ability to use it in production in a secure and scalable way. The roles and methods required are clear: AI platform engineering to define an infrastructure that serves workloads flexibly, open standards to preserve freedom of choice, and leadership capable of governing the transformation. The maturity of any technology adoption path is measured by ROI, solution deployment time and security policy, not by the sheer number of experiments performed.

A structured approach to moving from proof of concept to production should foster collaboration and help break down silos, integrating operational IT activities with development work. It should also bridge the skills gap by combining technical and more functional expertise, making the entire chain of competences needed to bring Generative AI into business processes productive. In this short article we will refer to Generative AI simply as AI, although many of the considerations also apply to Predictive AI.

From experimentation to production: ROI, infrastructure and scalability

According to a recent Red Hat survey, Italian business leaders rightly expect a clear return on investment from AI adoption and identify 'Lack of clear business value or ROI' (31 per cent) and 'Insufficient infrastructure' (31 per cent) as the main barriers encountered.


Return on investment is certainly a crucial issue. Many organisations need to define specific KPIs during the trial phase so that they can assess whether the use case under consideration really delivers that return.
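
By way of illustration, the minimal sketch below shows the kind of check that trial-phase KPIs should make possible; the figures and the benefit model are pure assumptions, chosen only to make the calculation concrete.

```python
# Illustrative only: a trivial ROI check built on made-up trial-phase figures.
# In a real pilot, the benefit would be derived from measured KPIs
# (hours saved, tickets deflected, error rates) rather than assumed.
estimated_annual_benefit = 180_000  # euros; assumption for illustration
estimated_annual_cost = 120_000     # GPU/cloud spend plus staff time; assumption

roi = (estimated_annual_benefit - estimated_annual_cost) / estimated_annual_cost
print(f"Estimated ROI: {roi:.0%}")  # prints "Estimated ROI: 50%"
```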

In the case of infrastructure barriers, a key issue is the availability of specialised hardware and computing resources (e.g. GPUs) for AI projects. To demonstrate the ROI of selected experimental initiatives without bearing the cost and lead time of procuring dedicated hardware, organisations tend to overcome this obstacle during the experimental phase by relying on public cloud infrastructure, which offers fast procurement and a rapid feedback loop at moderate initial cost. When moving to production, however, considerations of data privacy, security, scalability and cost predictability typically call for an on-premise or private cloud environment.

According to Red Hat, an open hybrid cloud approach, which integrates existing on-premise infrastructure with public cloud resources, addresses all of the considerations listed above and also aligns with the long-term priorities of Italian companies, such as 'Realignment of cloud strategy for AI' (56 per cent) and 'Cost optimisation' (55 per cent).

In particular, inference optimisation becomes crucial: the efficiency, reliability and performance of a generative AI solution depend to a large extent on the inference server. Forecasts such as Gartner's indicate that by 2028 more than 80 per cent of the accelerated computing resources used for training today will be allocated to inference. Solutions that provide efficient and stable inference capabilities, such as vLLM, combined with Kubernetes-based platforms to orchestrate workloads on distributed hardware, are therefore key to turning AI experiments into tangible value.
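
As a minimal sketch of this inference layer, the example below uses vLLM's offline Python API; the model name and prompt are illustrative assumptions, and a production setup would typically run the same engine behind an HTTP endpoint orchestrated by Kubernetes.

```python
# A minimal sketch of serving a model with vLLM's offline inference API.
# Assumes vLLM is installed (pip install vllm) and a GPU is available;
# the model name and the prompt are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.2, max_tokens=128)

outputs = llm.generate(["Summarise the key risks in this contract: ..."], params)
for output in outputs:
    print(output.outputs[0].text)
```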

A hybrid approach, built on optimised inference servers, helps break the vicious circle of endless proofs of concept that never become production solutions.

The 'Build or Buy' dilemma: skills and organisational silos

To address the many use cases that AI can help implement, many companies have launched parallel initiatives in different departments. While this may seem the most effective choice, it often duplicates effort and makes poor use of the time of a talent pool that is currently in short supply on the market; it also risks generating additional internal silos, identified as a barrier to cloud and AI adoption by 41 per cent of the Italian companies surveyed. This is primarily a cultural and organisational challenge.

We believe the problem of silos can be mitigated by a solution-design approach that combines IT and operational expertise with domain expertise, through activities such as:

- Customised analysis: requirements understood jointly by all solution stakeholders.

- Value-based selection: initiatives prioritised according to their expected value.

- Shared roadmap: clear planning to achieve the goals set by management.

- Comprehensive support: assistance during design, implementation and adoption, tailored to the different professionals involved and to the consulting or training contributions each requires.

- Widespread training: programmes tailored to the specific needs of the various professional roles.

An overall view that helps overcome organisational barriers leads to a higher probability of project success.

In the context of AI projects, a key issue is skills: 40 per cent of respondents agree that there is a significant skills gap in Generative AI, and for 49 per cent the main perceived gap is in the 'human and personal skills' category rather than in purely technical skills. We need professionals capable of implementing, managing, securing and scaling AI infrastructure in the complex environments where it is most needed, but we also need to develop the communication and collaboration skills that allow different groups to work together fruitfully.

We need AI platform engineers who understand:

- How to design platforms ready for AI workloads: creating distributed, resilient and scalable systems to run different workloads;

- The need to adopt open standards: guaranteeing flexibility and freedom of choice between suppliers, for instance via a common inference API (see the sketch after this list);

- How to apply security policy at scale (as identified in the many Sovereign Cloud initiatives): protecting AI infrastructure, data access and maintenance processes in complex, multi-tenant environments.
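
To illustrate the open-standards point, the sketch below relies on the OpenAI-compatible API that vLLM and several other inference servers expose: the same client code can target a self-hosted endpoint or a public provider simply by changing the base URL. The endpoint, key and model name are hypothetical.

```python
# A sketch of the OpenAI-compatible API as a de facto open standard for
# inference. Swapping suppliers only means changing base_url and model;
# the application code stays the same.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # hypothetical on-prem endpoint
    api_key="unused-on-prem",                        # placeholder, not a real key
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    messages=[{"role": "user", "content": "Classify this support ticket: ..."}],
)
print(response.choices[0].message.content)
```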

These skills mark the dividing line between organisations, and even nations, that merely have ambitions in the field of AI and those that have real capabilities.

Shadow AI and open leadership: from risk to governed innovation

The challenge of Shadow AI, i.e. the use of unauthorised AI tools by employees, is observed by 93 per cent of the organisations surveyed by Red Hat in Italy. Shadow AI introduces new risks and again underlines the need for internal training and for guardrails (control and verification systems), supported by platforms that are available to all teams in the company, governed, and designed with users in mind.
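
As an illustration of what a guardrail can look like in practice, the sketch below applies a deterministic check before a prompt reaches any model; the pattern list is a hypothetical policy, not a complete control system, and in production it would sit alongside model-based filters, access control and audit logging.

```python
# A hypothetical input guardrail: block prompts that appear to contain
# credentials before they are sent to any model.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def check_prompt(prompt: str) -> str:
    """Return the prompt unchanged, or raise if it matches a secret pattern."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Blocked by guardrail: possible credential in prompt")
    return prompt

# Usage: every call to the model goes through the check first.
safe_prompt = check_prompt("Summarise this meeting transcript: ...")
```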

At the same time, Shadow AI can also be seen as a positive signal for business leaders, as it indicates employees' ambition and desire to innovate.

The challenge, therefore, is not to stifle this bottom-up energy but to channel it in a safe and productive way. Overcoming these barriers requires moving from a collection of fragmented tools to a unified platform strategy. IT leaders and AI developers in Italy already recognise the way forward: 70 per cent agree that open source technology is important to their AI strategy. An open source AI platform with enterprise support, available to all employees, can thus offer the consistency and control needed to build, deploy and manage generative AI on any hardware and with any cloud provider.

Organisations need a governed environment where teams can access the tools they need to experiment and develop solutions with confidence. Instead of adopting heterogeneous systems in departmental silos (the main barrier to AI adoption), using an open source enterprise platform helps capitalise on and share knowledge, and avoids wasting time 'reinventing the wheel'. This approach allows IT to enable innovation rather than block it, helping to meet one of the top AI priorities for 52 per cent of respondents: transparency and openness. Open source provides that transparency and increases standardisation, helping businesses maintain control over decisions on AI approach and data management, listed as a priority by 51 per cent of respondents.

From skills to impact: making AI productive

The pressure to modernise and generate value through Generative AI and digital transformation is immense, but so are the risks, from vendor dependency to Shadow AI and the skills gap. It is advisable to take an approach that encourages collaboration from the outset among all the skills required to implement a solution, from data management through application integration to the management of a secure and scalable infrastructure. Organisations that invest in platform engineering, open standards and open leadership will turn the adoption of Generative AI into a lasting advantage.

*Director, Go To Market Services, Red Hat Italy

Copyright reserved ©