Artificial intelligence, Dynamo opens discussion with the third sector
The Foundation has released a document setting boundaries for responsible use, and raising questions about truthfulness and relationships
We all use it, often guided by nothing more than common sense. Yet for social organisations in the third sector, the governance of artificial intelligence is a sensitive issue. Large international organisations such as Doctors Without Borders have a policy for its use, with ethical principles and rules, but many others are still working on one. In Italy, Fondazione Dynamo Camp has just released its policy, publishing it on the Dynamo Academy website and proposing a debate on the issue to the entire third sector.
Digital Transformation
"This policy is fully in line with our path of transforming our database and our way of working through various digital tools, including artificial intelligence," explains Serena Porcari, CEO of Fondazione Dynamo Camp, which offers free recreational therapy programmes to minors with serious or chronic illnesses. "By experimenting, we understood what the first steps to take were, starting with Dynamo's intellectual capital, shared by all the teams."
With the growth of donations, the organisation began a decisive digital transformation, investing heavily in digital architecture. The database was made responsive and segmentable, and Dynamo's 70 staff were trained. Gpt was integrated into the work teams: alongside the generic Dynamo Gpt (with information about the organisation and its aims), a customised Gpt was created for each work area, according to the output it has to produce, and their results in turn enrich the general Dynamo Gpt. On the fundraising side, data management is integrated with Ai so that each donor can be sent a personalised communication based on their profile and donation history.
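The article does not describe Dynamo's implementation, but the idea of tailoring a message to a donor's profile and donation history can be sketched as follows. This is an illustrative example with invented names and segments; in practice the assembled text would serve as input to (or be generated by) the organisation's custom Gpt.

```python
from dataclasses import dataclass

@dataclass
class Donor:
    # Hypothetical profile fields; real donor databases will differ.
    name: str
    segment: str          # e.g. "recurring", "one-off", "major"
    total_donated: float  # in euros
    last_campaign: str

def personalised_message(donor: Donor) -> str:
    """Assemble a communication tailored to the donor's profile and history.
    In a real pipeline, this template (or a prompt built from the same data)
    would be handed to a generative model for the final wording."""
    openers = {
        "recurring": "Thank you for your continued support",
        "major": "Your extraordinary generosity means so much to us",
        "one-off": "Thank you for your donation",
    }
    opener = openers.get(donor.segment, "Thank you")
    return (f"{opener}, {donor.name}. Your contributions "
            f"(€{donor.total_donated:.2f} to date) most recently helped "
            f"our '{donor.last_campaign}' campaign.")

print(personalised_message(Donor("Anna", "recurring", 350.0, "Summer Camp")))
```

The key design point the article implies is that the donor database, not the model, drives the personalisation: segmentation happens in the data layer, and the generative component only shapes the wording.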
When to use Ai?
The document, 'Policy for the Responsible Use of Artificial Intelligence', lists its guiding principles: centrality of the person, inclusion and accessibility, protection of persons, authenticity and integrity of communication, and responsibility and human supervision. But how can these principles be translated into practice? The central question for the foundation is not "can we use Ai?" but "does it make sense to use it in this specific context?" In short, using Ai critically and consciously also means choosing not to use it when it is not necessary.
To assess needs concretely, the charter calls for identifying: the problem or process to be solved or improved; the actual added value Ai can bring in terms of efficiency, accuracy, accessibility, transparency or resource savings; the sustainability of adoption in terms of cost, the skills required, the ability to evaluate results, and the time needed for learning, updating and monitoring; and the existence of equally or more effective non-technological (or less complex) alternatives.
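The charter's criteria can be read as a simple decision gate: adopt Ai only when every condition holds. The function below is our own illustrative rendering of that logic (the parameter names are not Dynamo's), not part of the policy itself.

```python
def should_adopt_ai(problem_defined: bool,
                    adds_real_value: bool,
                    adoption_sustainable: bool,
                    simpler_alternative_exists: bool) -> bool:
    """Mirror the charter's questions as a single gate:
    - is the problem or process to improve clearly identified?
    - does Ai add real value (efficiency, accuracy, accessibility, ...)?
    - is adoption sustainable (cost, skills, evaluation, monitoring)?
    - is there an equally effective simpler or non-technological alternative?
    Adopt only if the first three hold and no simpler alternative exists."""
    return (problem_defined
            and adds_real_value
            and adoption_sustainable
            and not simpler_alternative_exists)

# A well-scoped use case with no simpler alternative passes the gate:
print(should_adopt_ai(True, True, True, False))   # True
# If a simpler non-technological solution works just as well, it does not:
print(should_adopt_ai(True, True, True, True))    # False
```

The point of spelling it out this way is the last condition: the policy treats "don't use Ai" as a first-class outcome, not a failure of the assessment.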