Digital Economy

Artificial intelligence: Dynamo opens a discussion with the third sector

The Foundation has released a document that sets boundaries for responsible use, and raises questions about veracity and relationships

by Alessia Maccaferri

The camp. It is located in Limestre, in the province of Pistoia, on the edge of an oasis of more than 900 hectares affiliated with the WWF

Translated by AI

We all use it, often guided only by common sense. Yet for social organisations, such as those of the third sector, the governance of artificial intelligence is a sensitive issue. Large international organisations such as Doctors Without Borders have a policy for its use, with ethical principles and rules, but many others are still working on one. In Italy, Fondazione Dynamo Camp has just released its policy, making it public on the Dynamo Academy website and proposing a debate on the issue to the entire third sector.

Digital Transformation

"This policy is fully in line with our path of transforming our database and our way of working through various digital tools, including artificial intelligence," explains Serena Porcari, CEO of Fondazione Dynamo Camp, which offers free recreational therapy programmes to minors with serious or chronic illnesses. "By experimenting, we understood what the first steps to take were, starting with Dynamo's intellectual capital, shared by all the teams."


With the growth of donations, the organisation began a decisive digital transformation, investing heavily in digital architecture. The database was made responsive and segmentable, Dynamo's 70 staff were trained, and GPT was integrated into the work teams. In addition to the generic Dynamo GPT (with information about the organisation and its aims), a customised GPT was created for each work area according to the output it has to produce, and its results in turn enrich the general Dynamo GPT. On the fundraising side, data management is integrated with AI so that each donor can be sent a personalised communication based on their profile and donation history.
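The donor-communication workflow described above can be sketched roughly as follows. This is a hypothetical illustration, not Dynamo's actual system: the record fields, segment labels and prompt wording are all assumptions. The key design point it shows is the one the policy requires, namely that the model only drafts and a staff member reviews before anything is sent.

```python
from dataclasses import dataclass, field

@dataclass
class Donor:
    # Illustrative donor record; field names are assumptions,
    # not Dynamo's actual database schema.
    name: str
    segment: str                                   # e.g. "recurring", "one-off", "major"
    donations: list = field(default_factory=list)  # past gift amounts in EUR

def personalised_prompt(donor: Donor) -> str:
    """Build a prompt for a text-generation model that tailors the
    message to the donor's profile and giving history. The draft is
    meant for human review, per the human-supervision principle."""
    total = sum(donor.donations)
    history = (
        f"{len(donor.donations)} gifts totalling EUR {total:.2f}"
        if donor.donations else "no previous gifts"
    )
    return (
        f"Draft a thank-you letter for {donor.name}, "
        f"a {donor.segment} donor with {history}. "
        "Keep the tone warm and factual; a staff member will "
        "review and edit the draft before it is sent."
    )

donor = Donor(name="M. Rossi", segment="recurring", donations=[50.0, 75.0])
print(personalised_prompt(donor))
```

Segmenting the database first (as the article describes) is what makes this kind of per-donor tailoring possible; the generation step itself is deliberately constrained to a reviewable draft.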

When to use AI?

The document 'Policy for the Responsible Use of Artificial Intelligence' lists the guiding principles: centrality of the person, inclusion and accessibility, protection of persons, authenticity and integrity of communication, responsibility and human supervision. But how are these principles translated into practice? The central question for the foundation is not "can we use AI?" but "does it make sense to use it in this specific context?" In short, using AI in a critical and conscious way also means choosing not to use it when it is not necessary.

To assess needs concretely, the charter calls for identifying the problem or process to be solved or improved; the actual added value AI can bring in terms of efficiency, accuracy, accessibility, transparency or resource savings; the sustainability of adoption in terms of cost, the skills required, the ability to evaluate results, and the time needed for learning, updating and monitoring; and the existence of equally or more effective non-technological (or less complex) alternatives.

'The innovation we are interested in is not the technology per se but the governance of these tools, that is, how we steer the AI,' points out Mattia Dell'Era, chief digital officer of Dynamo. 'No diagnoses, no clinical evaluations, and no automated decisions about people are allowed. It must be clear, for example, that publishing content entirely generated by AI without reformulation and adaptation is not permitted; we will never create text and video content with virtual beneficiaries or invented stories.'

Risk to relationships

Fondazione Dynamo tackles an aspect significant for a large part of the third sector when it affirms that AI raises critical issues such as the "truthfulness and authenticity of the content generated (in contrast with the experiential dimension on which Recreational Therapy is based) and the potential risk of replacing human-relational content". In short, the risk is that AI distorts the greatest asset of those who work in the social economy: the relationship, an indispensable capital of trust.

The policy involves not only employees and collaborators but also technology partners, suppliers of AI-based solutions, and other third parties. 'We will probably also include this policy in purchasing contracts,' adds Porcari. In addition, the foundation has provided for impact assessment tools: 'We would like to define KPIs, perhaps with the help of external partners.'

Copyright reserved ©
