Beyond IT: A case for a new professionalisation in AI

The release of ChatGPT in 2022 caused considerable anxiety within many organisations worried that employees would blindly rely on the chatbot for their day-to-day tasks. Tech companies like Samsung even banned the software after discovering that employees had accidentally leaked sensitive data while using it. But is ChatGPT really what we should be picturing when we think about artificial intelligence (AI)? And what really are the prospects for AI in the humanitarian sector?

According to the European Union's High-Level Expert Group on Artificial Intelligence, AI is any software or hardware system that interprets available data to decide the best action to take to achieve a predetermined goal. In the humanitarian sector, AI has already been put to use in a myriad of ways. For example, the International Committee of the Red Cross (ICRC) created the “Trace the Face” tool to reunite families separated during the Syrian refugee crisis. AI has also been used in disaster response: the Qatar Computing Research Institute, part of Hamad Bin Khalifa University, designed the Artificial Intelligence for Disaster Response (AIDR) tool, which uses information and images from social media platforms to map damaged infrastructure and vulnerable populations.

The deployment of artificial intelligence in the humanitarian sector through projects like these does not come without risk. For instance, facial recognition systems, like the one “Trace the Face” relies on, have been known to be less accurate in identifying people with darker skin because they were trained on racially homogeneous data sets. AIDR’s technology raises questions about data acquisition and privacy. Crowdsourced data from social media platforms could contain misinformation and inaccuracies, making the AI systems that rely on them untrustworthy. Moreover, navigating these risks presents new challenges to the already complex work environment of humanitarian organisations and staff. 

What does new professionalisation look like?

It may seem like the logical next step is to ask: are the risks associated with AI worth the potential rewards? But I would argue that this is the wrong question to be asking. The number of users and applications of AI will continue to grow, so refusing to adapt will only put the sector at a disadvantage. The more relevant question is, within the humanitarian sector, who is responsible for understanding the inner workings of AI tools? Who is responsible for ensuring they are secure and do not put civilians at risk?  

Within a humanitarian organisation, it is rare to find a team that specialises in the use of AI in current operations. Most often, the incorporation and management of AI systems are delegated to the IT team or an external technology consultancy. Delegating to IT is problematic: IT teams already carry broad responsibilities, and the field of AI involves too many specialisations for a general IT department to design and manage these systems securely and effectively. Alternatively, outsourcing to an external organisation may work in the short term, but employees across the organisation still need a basic understanding of the AI software and continued support if problems arise in the field.

Instead, humanitarian organisations could establish teams with the sole purpose of designing and managing AI systems. These teams would include innovation experts, tasked with conceiving and designing new technologies; data security analysts, who consult on matters of security and privacy during the design phase and monitor new AI technology once deployed; and AI specialists, who train staff outside the team in the inner workings of the latest technology and offer real-time support when problems arise.

Members of these teams must be well-trained in both humanitarian response and socially responsible computing. However, attracting talented computer scientists may be a challenge for humanitarian organisations competing against large tech corporations that can offer higher salaries and better benefits. To overcome this hurdle, organisations could invest at the undergraduate and graduate level by partnering with universities to offer programs and specialised training in humanitarian response for computer scientists, teaching them about the impact of their technological creations. Programs like this are already popping up. For example, the Science and Technology for Humanitarian Action Challenges, a partnership between the Swiss Federal Institute of Technology Lausanne (EPFL), ETH Zurich, and the ICRC, supports technology research for humanitarian response. By engaging in humanitarian research in their postgraduate studies, these researchers and students are more likely to develop the skills and passion for a role on a humanitarian AI team.


Humanitarian response has always been a collaboration of individuals from a range of professional backgrounds: doctors, fundraisers, educators, engineers, and policymakers. With recent advances in artificial intelligence technologies, it is time for the humanitarian sector to include a new profession: AI specialists. The formation of a new team dedicated to the incorporation of AI will allow humanitarian organisations to harness the potential of AI while minimising the risks associated with it. The humanitarian sector has an obligation to investigate and use all the tools available to it to help civilians in need. Right now, AI is that next, vital step.


(Photo: Gertrūda Valasevičiūtė on Unsplash)