AI Ethics

Navigating the Right to Humanity in Human and AI Collaboration

Griffin Bohm
April 19, 2024
5 min read

AI Ethics and the Blurring Boundaries Between Humans and AI

Every day, the boundaries between humans and Artificial Intelligence (AI) blur a little bit more. It feels like a human. It reads like a human. But is it really a human? While ChatGPT and similar large language models (LLMs) can be convincingly human-like, far thornier questions arise when we delve into collaboration between humans and AI.

AI Interfacing With People

Language models like OpenAI's GPT-4 can generate coherent and meaningful text, while DeepMind’s AlphaGo exhibits strategic decision-making abilities that outshine human champions. Such technological breakthroughs have made AI appear increasingly human-like, prompting vital ethical questions around transparency, deception, and autonomy.

Esteemed thinkers in technology ethics, such as Timnit Gebru, Co-Founder of Black in AI, and Ryan Calo, Co-Director of the University of Washington’s Tech Policy Lab, have underlined the importance of AI transparency. Gebru's work explores how opaque AI can perpetuate systemic bias, and Calo suggests that deceptive AI could undermine trust in digital interactions.

The right to know that we are interacting with a machine, not a human, has been supported by several industry bodies. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems recommends that "autonomous systems should be designed and operated in a way that respects human rights, identity, and privacy." Similarly, the EU's Ethics Guidelines for Trustworthy AI emphasize transparency, declaring, "AI systems must be transparent...in disclosing their capabilities and limitations."

As AI becomes increasingly commonplace, a new ethical dimension arises that demands our attention – the right to a human.

[Image: a robot with human-like facial features. Caption: Balancing technology and humanity is key in ethical AI-driven fundraising.]

The Principle of the Right to a Human

This principle holds that individuals should be aware when they are interacting with an AI rather than a fellow human being. Stated so simply, it introduces more questions than it answers. For instance, it may be clear when we are typing a message to ChatGPT that we're interacting with AI. But what about subtler situations, where an AI tool drafts an email or an automated system interacts with us on social media? How much AI involvement necessitates disclosure?

In the coming weeks, in partnership with fundraising.ai, we will be exploring some of the key questions around how nonprofits and fundraisers use AI.

Key Questions for AI Usage in Fundraising

  • When should AI involvement be disclosed?
  • How should this disclosure be made?
  • At what point does something qualify as being AI-generated?
  • How responsible are humans for what AIs produce?

Beyond the Scope of Fundraising

Of course, these are deep questions that go far beyond the scope of fundraising and into the public square. Hopefully, with time, ethical frameworks will emerge that can apply broadly to all uses of AI.

If an AI drafts a grant that gets edited, changed, redrafted, and eventually submitted by humans, who deserves credit? What if the grant is discussed, debated, and chewed on, but ultimately left largely unchanged?

In the context of a grant application, the question of who deserves credit may not actually be highly material. Grants are often submitted with many contributing authors or, more frequently, with no authorship attributed beyond the name of the organization submitting them.

And in the theoretically meritocratic world of grant-making, who wrote the application shouldn't have a huge impact: the reviewer simply cares about whether the organization qualifies for, and deserves, the grant being made.

We propose a potential guideline: if more than 50% of any interaction is conducted or facilitated by AI, disclosure should be mandatory. However, this raises additional questions. How do we measure the 'percentage' of AI in an interaction? It's one thing when an AI conducts an interaction start-to-end, like an automated chatbot. But it's quite another when AI aids a human, like drafting emails or generating reports. In the latter case, AI is an assistive tool, not the primary actor.
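To make the measurement problem concrete, here is a minimal sketch, in Python, of how such a percentage rule might be applied. It assumes, purely for illustration, that every segment of an interaction can be cleanly attributed to either a human or an AI; the names and the character-count heuristic are our own assumptions, not an established method, and real collaborations rarely decompose this neatly.

    # A toy illustration of the 50% guideline proposed above, assuming
    # (unrealistically) that each segment of an interaction can be labeled
    # as human- or AI-authored. Names and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        text: str
        author: str  # "human" or "ai" -- a simplifying assumption

    DISCLOSURE_THRESHOLD = 0.5  # the proposed 50% rule

    def ai_share(segments: list[Segment]) -> float:
        """Fraction of the interaction's characters attributed to AI."""
        total = sum(len(s.text) for s in segments)
        if total == 0:
            return 0.0
        return sum(len(s.text) for s in segments if s.author == "ai") / total

    def requires_disclosure(segments: list[Segment]) -> bool:
        """Apply the naive threshold rule."""
        return ai_share(segments) > DISCLOSURE_THRESHOLD

    # Example: an AI-drafted thank-you email lightly edited by a human.
    draft = [
        Segment("Dear supporter, thank you for your generous gift to our programs.", "ai"),
        Segment("P.S. Hope to see you at the gala!", "human"),
    ]
    print(f"AI share: {ai_share(draft):.0%}; disclose: {requires_disclosure(draft)}")

Even this toy version exposes the weakness of a raw percentage: a single AI-written sentence in a sensitive context can matter more than a thousand AI-written words of routine boilerplate.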

[Image: a retro balance scale. Caption: Striking the right balance between AI capabilities and ethical considerations is crucial in modern fundraising.]

While percentages may serve as starting points, they will not capture the full nuance of an interaction. The context and potential impact of the interaction should guide the need for disclosure. For instance, AI involvement in generating a routine bulk email may seem inconsequential, but AI's role in drafting a sensitive legal document is far more significant.

Moreover, the 'disclosure' should be designed in a way that is understandable and meaningful to users, instead of merely ticking a box for compliance. A pop-up saying, "This interaction involves AI," might be technically correct but doesn't help users understand what that truly means. What role did the AI play? What are its limitations? Users need comprehensive yet digestible information to make informed decisions.
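As an illustration only, a disclosure that answers those questions might be structured along the following lines. The field names here are hypothetical, sketched for the sake of the argument rather than drawn from any existing standard.

    # A hypothetical structure for a disclosure notice that goes beyond
    # "This interaction involves AI." Field names are assumptions made for
    # this sketch, not an existing standard or regulation.
    from dataclasses import dataclass, field

    @dataclass
    class AIDisclosure:
        role: str                      # what the AI actually did
        human_oversight: str           # how humans reviewed or edited the output
        limitations: list[str] = field(default_factory=list)

        def render(self) -> str:
            """Produce a short, plain-language notice for the reader."""
            lines = [
                f"AI involvement: {self.role}",
                f"Human review: {self.human_oversight}",
            ]
            if self.limitations:
                lines.append("Known limitations: " + "; ".join(self.limitations))
            return "\n".join(lines)

    # Example: a fundraising email whose first draft came from an AI tool.
    notice = AIDisclosure(
        role="Drafted the first version of this email",
        human_oversight="Reviewed and edited by a staff member before sending",
        limitations=["May reflect outdated program details"],
    )
    print(notice.render())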

Safeguarding Our Right to Humanity

In conclusion, the right to humanity, recognizing people's need to know when they're interacting with an AI, is a crucial ethical concern in our increasingly digital world. Transparency isn't a one-size-fits-all concept, but a nuanced principle that requires careful implementation. As we move forward, we need more conversations involving ethicists, technologists, lawmakers, and, importantly, the public, to shape the guidelines that will navigate our interactions with AI, safeguarding our right to humanity.

Subscribe to our newsletter

Get the latest information on major gift fundraising, donor psychology and more. Straight to your inbox.
