Responsible Tech Guide by All Tech Is Human
Guide | 2024
Introduction 4
An Evolving Ecosystem 16
Responsible AI 22
Trust & Safety 51
Youth, Tech, and Wellbeing 65
Public Interest Technology 85
Cyber & Democracy 105
Tech Policy 118
Acknowledgments 155
Introduction
Welcome
If you are interested in tackling thorny tech and society issues, you have
found the right place.
It is easy to feel like you are alone in deeply caring about ensuring that our tech future is aligned with the public interest, but in reality there is a vibrant Responsible Tech ecosystem comprised of thousands of people from a wide variety of backgrounds, disciplines, and perspectives. This is your invitation to join in and find where you can add value, whether that's change through civil society, government, industry, or academia.
This year, we synthesized learnings from our mentorship program, large Slack
community, working groups, livestreams, and dozens of in-person gatherings
to provide a comprehensive and informed view of the Responsible Tech
ecosystem and how you can best contribute your perspective and add value
as we tackle complex tech and society issues.
The Responsible Tech Guide, along with all of the work of All Tech Is Human
(ATIH), is focused on three pillars for creating a strong Responsible Tech
ecosystem: building community, increasing education, and providing
career resources and guidance.
Gemma Galdon Clavell (Page 45), Johanna Weaver (Page 137), Dr. Joy Buolamwini (Page 36), June Okal (Page 134)
Responsible Tech intersects with many disciplines:

Data Science: How can we demystify the statistical foundations of "AI" and "Big Data"?
Information Science: How is information stored and disseminated online?
Philosophy: How can we harness theories of philosophy and ethics to better shape technology?
Sociology: In what ways does technology impact our social organizations and relationships?
Art: How does technology shape the way we view art and media?
Psychology: How can technology influence our minds and behavior?
Health: What role can technology play in developing a more equitable healthcare system?
Social Work: How can we apprise individuals about their digital rights and protections?
Community Development: How can communities leverage technology for equity and access?
Policy: As technology becomes more ingrained in society, how can policy evolve to reflect the voice of citizens?
The line waiting to get into ATIH’s Responsible Tech Mixer + Book Launch With Kashmir Hill in NYC.
41%: Individuals in the tech industry make up a plurality of the network
20%: C-suite/Exec-level
25%: Entry-level
98: Countries ATIH reaches globally*
163%: Average YoY growth rate on LinkedIn
Top geographic locations: NY, SF, DC, London
100+: Cities globally with at least 40+ members each
Download 2024 Ecosystem Pulse Report
*Countries ATIH reaches globally was updated August 2024
02 To align our tech future with the public interest, we need to involve the
public.
06 People often struggle to “find the others” and discover the wide
variety of people and orgs committed to co-creating a better tech
future.
09 Tech innovation moves fast, while our consideration of its impact often moves slowly. We need to reduce the gulf between these.
We cannot align our tech future with the public interest unless we actively
involve the public. All Tech Is Human’s approach brings together people of
all backgrounds and skill levels to learn from each other, build community,
and co-create a better tech future. Find out more at alltechishuman.org.
An Evolving Ecosystem
Evolution of Responsible Tech
Generative AI
OpenAI released ChatGPT in November of 2022, and with it came the
widespread use of consumer chatbots in classrooms and workplaces around
the world. The broad accessibility of consumer chatbots brought responsible
AI considerations into common consciousness.
2018: General Data Protection Regulation (GDPR) takes effect, focusing on information privacy. (May)

2022: Elon Musk buys Twitter and eliminates roughly 80% of its staff, including entire Trust & Safety teams. (April)

2022: Digital Services Act is adopted by the EU, addressing disinformation and illegal content. (October)

2022: OpenAI releases ChatGPT, kicking off the consumer Generative AI era. (November)

2023: US issues Executive Order on Artificial Intelligence, establishing new standards for safe, secure, and trustworthy AI. (October)

2024: US Senate passes the Kids Online Safety Act, protecting minors from harmful material on social channels and creating a 'duty of care' for online platforms. (July)

2024: The EU AI Act comes into force, regulating the use of AI to ensure safe, transparent, and non-discriminatory practices. (August)
Tech Policy
Tech Policy professionals work on regulation and governance of current
and emerging technologies. This can happen through company and
industry policies as well as through government frameworks and
legislation. As technology continues to touch so many aspects of society, tech policy is intertwined with technology's impact on individuals, society, and the public interest. As a result, tech policy is highly multidisciplinary and can cover a wide range of issues, from advocacy and freedom of expression to education, health, and online safety, to name a few.
Responsible AI
Responsible AI
Responsible AI has been a key focus for ATIH since our founding in 2018, when the first sets of principles and guidelines were being drawn up by pioneering NGOs and early-adopting tech giants.

Immediately following the release of our Responsible Tech Guide in 2020, we formed a working group and released our first RAI report, "The Business Case for AI Ethics."

In the few short years since, we have seen RAI's center of gravity shift from novel and theoretical societal harms, once the purview of academic researchers, to universal concerns that are of interest to every person with a smartphone and access to a consumer chatbot.

In 2024, our focus has shifted from uncovering potential harms to identifying the best governance mechanisms for aligning AI usage with the public interest. This year, we have been providing opportunities to understand and engage in evaluation of artificial intelligence systems through public accountability mechanisms, including:

An AI governance maturity model we released in partnership with TechBetter
An algorithmic bias bounty challenge we co-hosted with Humane Intelligence
A partnership with Thorn to secure commitments from Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, and others to guard against AI-generated child sexual abuse material (AIG-CSAM)

With these initiatives, we seek not only to provide access to opportunities but also to maximize the impact of our collective efforts.

RAI impacts are ubiquitous now:

Automated decision-making systems
Biometric surveillance tech
Autonomous weapons
Deepfakes & info integrity
Job displacement
Security and safety risks
ALL TECH IS HUMAN | RESPONSIBLE TECH GUIDE 2024 | 23
RESPONSIBLE AI
Transparency: Transparency
reflects the extent to which
information about an AI system and
its outputs is available to
individuals interacting with such a
system – regardless of whether they
are even aware that they are doing
so. (NIST)
International Association of
Algorithmic Auditors (IAAA): “The
IAAA is a community of practice that
aims to advance and organize the
algorithmic auditing profession,
promote AI auditing standards,
certify best practices and contribute
to the emergence of Responsible
AI.”
How does your role help tackle thorny tech & society issues?
I am a poet of code and the author of Unmasking AI: My Mission to Protect
What is Human in a World of Machines. I tell stories that make daughters of
diaspora dream and sons of privilege pause. As the founder of the
Algorithmic Justice League, I lead an organization that uses poetry, creative
science communication, and research to prevent the deployment of harmful
AI systems and increase accountability in the creation of technology.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
Especially earlier in my career, I sensed that focusing too much on subjective
experience or leaning into my artistic side might lead to not being taken
seriously. There’s a genuine concern that if you don’t come from a technical
enough background or if you show artistic inclinations you may be dismissed
by the tech establishment. However, in my work, art has been crucial to
advancing the conversation about algorithmic bias, making ideas more
accessible, and adding nuance.
For example, when I was a member of the EU Global Tech Panel, I shared a
video of my spoken word algorithmic audit poem, “AI Ain’t I A Woman?”
which shows different facial analysis technologies failing on the faces of
iconic Black women. With that piece, I highlight how these technologies fail
and what those failures mean. I reference Sojourner Truth’s “Ain’t I A Woman”
speech and her struggle to be recognized in her full humanity within a
political system. Simultaneously, you see on screen a facial detection system
failing to find a human face in a picture of a young Oprah Winfrey. This art
piece informed conversations about the integration of computer vision
systems including facial recognition into lethal autonomous systems. The
power of storytelling in that work connected me with everyone in the room,
allowing me to speak truth to power. In a time when creative work is being
mechanized and devalued, I urge us to keep making space for artists,
storytellers, and poets.
What is your vision of a better tech future and how can we move towards that vision?
As I write in Unmasking AI, Algorithmic Justice means the people have a
voice and a choice in determining and shaping the algorithmic decisions that
shape their lives. That when harms are perpetuated by AI systems there is
accountability in the form of redress to correct the harms inflicted. That we
do not settle on notions of fairness that do not take historical and social factors into account. That the creators of AI reflect their societies. That data does not destine you to unfair discrimination. That your hue is not a cue to dismiss your humanity. And that AI is for the people and by the people, not just a privileged few. We get there by
developing mechanisms and
communities of practice that surface
the limitations of AI systems so we can
reach our greater aspirations.
Sinead Bovell
Futurist, WAYE Founder
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
My background is rooted in finance and chemistry, with a master’s in
business, where I was first introduced to exponential technologies and the
practice of strategic foresight. This sparked my passion for understanding
the future of innovation. After transitioning into management consulting, I
took an unexpected turn into the fashion industry, which ultimately led me to
launch my own tech education company. One of the key lessons I’ve
learned is that you have the right to rewrite your career story as many times
as necessary. Your career grows and evolves alongside you, reflecting your
experiences and aspirations.
What is your vision of a better tech future and how can we move towards
that vision?
My vision of a better tech future is one that reflects the diverse aspirations
of society, not just those of a select few with outsized resources. It’s a future
where we adopt a long-term lens to planning, allowing us to proactively
shape the future rather than simply react to it. In this future, individuals feel
empowered by the decisions we make today, knowing that these choices
are informed by a broader, more inclusive vision. By embracing strategic
foresight and thoughtful collaboration, we can create a future where
technology serves the collective good and aligns with the values and needs
of all.
Navrina Singh
Founder and CEO, Credo AI
“I believe we need more representation from marginalized
communities—those who have historically been overlooked
in tech. These are the voices most impacted by AI bias,
disinformation, and lack of inclusivity, yet they are often
absent from the conversations shaping AI's future.”
In your opinion, what is the biggest tech &
society issue we are currently facing?
The biggest tech and society issue we’re
facing today is the erosion of trust in AI and
technology as a whole. While AI offers
immense potential to solve complex global
challenges, it also introduces significant
risks—such as bias, privacy invasion,
disinformation, and lack of accountability.
The rapid pace of AI development has
outstripped the creation of governance
frameworks to manage these and other emerging risks, leaving a dangerous gap
between innovation and responsible use.
This gap not only threatens societal values
but also national competitiveness.
Countries and organizations that fail to implement responsible AI
governance will lose the trust of their citizens and global partners, ultimately
falling behind in the race for AI leadership. The future of global
competitiveness hinges on the ability to balance innovation with ethical
responsibility. At Credo AI, we believe that closing this gap is critical to
ensuring AI serves humanity in a positive, equitable way. Our mission at
Credo AI is to create the infrastructure of trust for AI, helping organizations
govern their AI systems responsibly and ensuring that technology empowers
rather than harms. By embedding transparency, accountability, and values
into AI systems, we help organizations build trust, drive innovation
responsibly, and ensure that their AI systems empower rather than harm.
Trust is not just essential for societal well-being—it’s the foundation for
sustained leadership in the AI-driven economy.
How does your role help tackle thorny tech & society issues?
As the Founder and CEO of Credo AI, my role is centered around building
the infrastructure of trust for AI and ensuring that this transformative
technology is developed and deployed responsibly. I lead our vision and
strategy to address critical challenges at the intersection of technology and
society, focusing on creating governance frameworks that align AI with
ethical standards. I collaborate closely with governments, global
organizations, and enterprises to implement accountability measures for AI
systems.
A key part of my role is bridging the gap between innovation and regulation,
ensuring that AI's rapid development doesn't outpace our ability to manage
its societal impacts. This is vital not only for protecting human rights but also
for maintaining national competitiveness in the global AI race. At Credo AI,
we create software tools and frameworks that help organizations align their
AI systems with their values, standards, and regulations. We guide them in
navigating complex AI capabilities and understanding its risks, ensuring their
AI technologies contribute positively to society while driving business
growth. In this capacity, I’m privileged to lead a movement that is
transforming how we govern AI, making sure it remains a force for both
societal progress and national innovation.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I founded Credo AI in March 2020, drawing on over 20 years of experience
in technology, including roles at Qualcomm and Microsoft, as well as
advisory positions with Mozilla AI, the U.S. Department of Commerce, OECD
and the United Nations. Credo AI is the manifestation of my life’s work,
values, and convictions—ensuring AI serves humanity responsibly. My
journey started in India, where my father’s military service and my mother’s
dedication as an educator instilled in me the values of hard work, integrity,
and resilience.
What is your vision of a better tech future and how can we move towards
that vision?
My vision for a better tech future is one where innovation not only advances
rapidly but remains deeply aligned with human values. As we push the
boundaries of technology, including the development of non-human
intelligence and superintelligent systems, it is imperative that these
advancements are in service of humanity. In my view, the true power of AI lies
not in its ability to surpass human intelligence but in its potential to amplify
our collective well-being, creativity, and progress.
To move toward this future, we must ensure that AI and other emerging
technologies are built with governance and oversight. This requires a
profound shift—embedding governance and Responsible AI frameworks from
the outset, rather than treating them as afterthoughts. We must foster
collaboration across diverse voices, including technologists, ethicists, and
policymakers, and focus on creating systems that respect human rights and
dignity. At Credo AI, we are committed to driving this vision forward by
providing the tools to govern AI responsibly and ensure intelligence evolves
in ways that benefit all of humanity. Innovation without humanity is empty, but
when innovation is aligned with our core ethical principles, it can propel
society forward in ways we have yet to imagine.
How does your role help tackle thorny tech & society issues?
Tackling bias issues in AI requires more than principles and general commitments; it demands specific engineering practices and fairness metrics.
After 15 years in the Tech Accountability space, we realized that for these
efforts to exist, they had to be validated. Eticas.ai is a startup focused on
developing AI auditing software that validates fairness efforts. As an
independent party, we can set the benchmarks and standards, and help
those who want to pioneer fairness tools by giving them a competitive
advantage.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I've been in the responsible data/tech space for over 15 years. I completed
my PhD on tech and policy and was one of the few people working on tech
from the social sciences. My multi-disciplinary work was so unique that it was
relatively easy to get attention and funding, and create a space for socio-
technical innovation around Responsible Tech. Today, we are one of the few
organizations doing technical work from a digital rights and AI accountability
perspective. So my career has been neither linear nor planned! I would advise
anyone seeking to enter this space to be prepared to work very hard: we are
inventing the future. Our ability to correct the course of Big Tech will
determine the rights, quality of life and opportunities of generations to come.
What is your vision of a better tech future and how can we move towards
that vision?
A future where tech and AI contribute to fairness and equality instead of
eroding it. A future where tech helps us fix long-established discrimination
and power dynamics, instead of worsening them. And, of course, a future
where all AI is audited so that we can be sure that innovation outcomes are
safe and trustworthy.
Cansu Canca
Founder & Director, AI Ethics Lab; Director of
Responsible AI Practice & Research Associate
Professor in Philosophy, Northeastern University
“Technology exists to serve humanity, and its purpose
should align with our most fundamental values.”
As AI increasingly shapes our world and our lives, ensuring the ethical design
and transparent governance of these systems becomes critical. In other
words, the biggest issue is not a singular problem but a broader, complex
systems challenge: how to build and use AI to enhance social justice and
empower personal autonomy.
How does your role help tackle thorny tech & society issues?
My work is very hands-on. I am the lead AI ethicist to multidisciplinary teams
of computer scientists, philosophers, legal scholars, and designers. We
provide consulting to companies and organizations that are developing
and/or deploying AI systems to make their AI ethically better and to put in
place a responsible AI governance framework so that all AI that they develop
or procure is assessed for its risks and is not deployed unless potential risks
are effectively mitigated. For example, we help insurance companies, banks, educational institutions, and healthcare providers create "fairer" AI models, establish responsible AI workflows, integrate ethics impact assessments that engage key stakeholders, and create effective user interfaces for better human decision-making using AI. Additionally, I advise international organizations on AI ethics. This includes working with the UN and INTERPOL to design and develop a responsible AI innovation toolkit for law enforcement agencies, as well as collaborating with the World Economic Forum on a responsible AI investment playbook.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I am trained as a philosopher, specifically as an applied ethicist. I have a PhD
in philosophy, and I worked on ethics and health for over a decade,
conducting research on end-of-life decisions and life-saving resource
allocation, advising healthcare officials
on public health policy decisions,
consulting hospitals on complex cases,
and training physicians and medical
students. This background provided me
with robust technical knowledge and
tools from philosophy, along with real-life
experience in complex, life-and-death
situations. I transitioned to AI ethics initially because of AI-driven healthcare technologies and the lack of ethical assessment of these technologies. I was too early; this was around 2015, when I first began exploring AI ethics.

(Photo from Mozilla Rise25 award ceremony, held August 13, 2024 in Dublin, Ireland.)
At the time, the role of "AI ethicist" was not yet a thing, and there were no
positions available in academia or industry. So, I founded AI Ethics Lab to fill
this evident gap. I focused on building knowledge and networks, diving
deeply into AI ethics problems, collaborating with practitioners and
researchers, designing tools, developing methods, and creating training.
That said, philosophers cannot work in isolation. To truly navigate the ethical
challenges of AI, we need the insights of social scientists, computer scientists,
engineers, and policy experts. Another critical, yet frequently
underrepresented, voice in this ecosystem is that of designers. The ethical
use of AI hinges on how well people understand these systems, and that
understanding is largely shaped by design. User interfaces play a vital role in
communicating what AI can and cannot do, making designers essential to the
responsible tech conversation—not peripheral participants, but integral
partners in creating ethical AI systems.
What is your vision of a better tech future and how can we move towards
that vision?
We live in a deeply unfair and unsustainable world, where human life is driven
and shaped by a curious measure called “productivity”. But technology,
particularly AI, has the potential to shift the balance toward a more just
distribution of wealth and opportunity, a more sustainable planet, and lives
driven by meaning rather than market utility. While this vision may seem
utopian, every step toward it matters. Can we leverage AI to advance racial
justice in the criminal justice system, promote gender equity in healthcare, or
foster financial inclusion for marginalized communities? Can we harness AI’s
productivity to empower everyone—not just the privileged few—to pursue
goals that they find personally meaningful? The answer is an unequivocal yes.
Technology exists to serve humanity, and its purpose should align with our
most fundamental values. As philosophers have long argued, this boils down
to reducing suffering (of all sentient beings), enhancing self-determination,
and upholding fairness and justice. By keeping these principles at the core of
AI development, we can work toward a future where technology genuinely
elevates humanity.
Camille François
Professor, Columbia University School of
International & Public Affairs
“I'd like our technological futures to be open, preferably open
source when that can be on the menu, with a generous side
of pluralism and safety.”
How does your role help tackle thorny tech & society issues?
I've been fortunate to tackle these issues from a few different vantage points:
as an executive leader in Silicon Valley, as a researcher and a professor, and
as an advisor to policymakers in the U.S., E.U. and France.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I had plenty of luck, good mentors, and followed my curiosity wherever it led.
Sean Litton
President and CEO, Tech Coalition
How does your role help tackle thorny tech & society issues?
I lead the Tech Coalition, the industry alliance of more than 40 global tech
companies of varying sizes and services tackling the challenge of online child
sexual abuse and exploitation (OCSEA). Our efforts are focused on building
industry capacity to protect children as they enjoy the same online tools and
services we all use to connect, share, and learn. The Tech Coalition provides
a safe space for our industry members to collaborate on the most significant
issues relating to OCSEA and identify, develop and deliver new initiatives that
drive real results. We also provide critical insights, practical resources and
step-by-step guides to support individual companies as they build their
capacity to combat OCSEA. Lastly, we foster constructive engagement and
dialogue between industry and third-party stakeholders to enhance
understanding, build trust, and drive collective action to combat OCSEA.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I am a seminary dropout who found his calling and people at Notre Dame Law School. After a few years as a lawyer with Kirkland & Ellis in Washington,
DC (extraordinary training for which I am forever grateful), I joined a startup
human rights organization called International Justice Mission (IJM) and
moved to the Philippines to work with local authorities to investigate and
prosecute cases of child sexual exploitation and abuse. This was supposed to
be a temporary break from practicing law in the US but I loved the work and
people so much that I stayed with IJM for 20 years, moving into leadership as
it continued to grow and eventually serving as President. In late 2020, I
began having conversations with several tech companies about leading the
relaunch of the Tech Coalition (it had originally started in 2006 but had
struggled to gain sustained momentum because it had no full-time staff).
Given the scale of the companies involved and their level of commitment, it
was clear to me that this was perhaps the single most leveraged and strategic
opportunity to drive global impact on child protection that I would ever
encounter. I gratefully accepted the opportunity to lead the Coalition and was
soon joined by a phenomenal team of leaders who have rapidly built
momentum and trust with our members. Together, we are driving real impact
for children. Here is my career advice: Take risks. Work hard. Be grateful.
Take care of yourself and your team.
TRUST & SAFETY
What is your vision of a better tech future and how can we move towards that vision?
The Tech Coalition is trying to build a safer internet for kids. One that helps
them learn, play and creatively express themselves, but that also has
safeguards to protect them against people who want to harm them. This is a
whole of society challenge that requires a whole of society response. Child
sexual abuse is not a new problem. Certainly, the tech industry must do its
part to prevent, detect and report
attempted abuse online. This is my
focus every day. But as a society,
we need a more open and informed
dialogue on child sexual abuse. We
cannot avoid it. We cannot continue
to be surprised when it shows up in
an institution we trusted. Around the
world, there needs to be a greater
investment in ensuring individuals at
risk of offending against children
have access to the assistance they
need so that they do not offend.
There needs to be a greater
investment in the National Center
for Missing & Exploited Children's CyberTipline, which receives more than 30
million reports a year from industry relating to online child sexual exploitation
and abuse. And there needs to be a corresponding increase in investment in
global law enforcement to ensure that they have the capacity to respond to
these reports. Currently, they do not. We have to look at the whole pipeline.
Industry must do its part to ensure its products are safe AND we all have to
work together. This is what is required to keep our children safe.
AN EVOLVING ECOSYSTEM
Tell us about your career journey. How did your career grow, and what
advice would you give to others wanting to be in a similar position?
For the first 30 years I was a film director. I made feature films in the UK and
US and, between the narrative films, I made documentaries on issues that
interested me. In 2012, when the smartphone reached a price point at which an adult might give one to a child, I made a film about children and the Internet. That film changed my life, and led to me becoming a full-time legislator and campaigner.
Going from directing to policymaking may appear discordant, but many of the
competencies of bringing together a large team and walking them towards an
as yet unseeable goal have been helpful. And over the last decade or so,
many more people have joined in the fight for a different kind of
technologically-enabled childhood. One not based on commercial interest,
but one that would build the digital world children deserve.
Almost always they finish the workshop with a sense that they have found a
language to express what is happening to them. That they have understood that
the hours they spent doomscrolling were not so much their weakness as the
success of a product team that has spent months and millions of dollars to get
them to do just that.
Similarly, educators have a very clear idea of what good looks like, and which
aspects of digital norms and digital design are messing up the kids in their
classroom. Like the children, they are rarely seeking a tech-free world, but are
desperate for someone - anyone - to detoxify the race for attention that defines
this generation of products and services. The cost to learning of poor Ed Tech
that does not mirror pedagogical needs, the tired or anxious children who pay
more attention to their device than their learning, and the extraordinary cost of
mis/disinformation - whether about public health and elections, or the kid in their
class - is heartbreaking to educators. So, too, is the automated prevalence of
misogyny, suicide ideation, self-harm, pornography, and child sexual abuse, AI-generated or otherwise. They don't understand why they are so confined in what and how they teach, while this is allowed to undermine children unabated and unpunished.
Parents are confused and guilty, unable, sometimes unwilling, to set themselves in opposition to something that has grasped hold of themselves as well as their children. Amid family conflict and a feeling that there are only bad choices, many parents, perhaps even most, would like clear rules of the game on addictive practices, because at least then they could set some realistic conditions.
Finally, the impact on communities is so huge, I believe they should have a say.
Let towns or communities say if they want Airbnb; some might prefer to guarantee the supply of houses for locals. Let teachers (or education boards)
have pedagogical criteria for Ed Tech that can be easily understood rather than
flood the market with expensive untested services designed by generalists. Let
services that do accounting, sell cars or holidays, have to meet the professional
standards of the sector, make sure that surge pricing is transparent and
understood (maybe even limited) so that a girl alone at night is not suddenly
faced with walking home because they can no longer afford the ride and so on
and so forth. As it stands, we are paying the costs of a model that has innovated
its way out of responsibilities that are practical and protective of those that they
impact.
What is your vision of a better tech future and how can we move towards
that vision?
A responsible tech sector follows the needs and norms that we have already
created in our desire to live together with a commitment to delivering benefits,
security and wellbeing to the broadest number of people. Depending on how
you analyse how we got here, tech is either the cause or simply took advantage
of a period in which corporates have been given licence to make money without
making any artifacts. It has unleashed a movement of wealth to an ever-fewer,
ever-richer group of people. The freedoms they trumpet in their defence
(speech, movement of capital, responsibility for impact, etc.) are also unequally
distributed. So a female politician can give up the public square, give up her
right to wellbeing or give up her job - but does not have the freedom to go to
work unabused. Ditto, it is now best practice not to post more than 40% of a
child online so that they cannot be scraped and turned into AI child abuse
material. Similarly, the freedom to make products that entice others into a state
of compulsion is something that costs society dearly, as does the attention-
seeking mis- and disinformation that feeds conspiracy theories and fills social
media coffers.
Trisha Prabhu
Founder and CEO, ReThink;
Founder and President, ReThink Citizens
“So many young people wonder, “What can I do? What can I
offer?” In fact, we young people are uniquely positioned to
build the digital world that we want to live in. So go for it!
Make your impact.”
How does your role help tackle thorny tech & society issues?
I’m the Founder and CEO of ReThink, a global movement with the mission of
eradicating online hate and cultivating a new generation of responsible digital
citizens. Our flagship product is our ReThink technology, an internationally
acclaimed tool that helps students pause, review, and ReThink before they
say something harmful or offensive online.
ALL TECH IS HUMAN | RESPONSIBLE TECH GUIDE 2024 | 78
YOUTH, TECH, AND WELLBEING
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I started ReThink when I was just 13 years old. After experiencing and
witnessing cyberbullying, I knew that I couldn’t be a bystander -- I had to be
an Upstander. I was particularly passionate about ensuring that we weren’t
putting the burden on cyberbullying victims -- or on youth, generally -- to
respond to online hate; instead, I wanted a system that was safe by design.
The result was ReThink, a solution that
stopped cyberbullying before it happened.
My advice to fellow young people passionate
about building a better internet is to know
how powerful your voice is. So many young
people wonder, “What can I do? What can I
offer?” In fact, we young people are uniquely
positioned to build the digital world that we
want to live in. So go for it! Make your impact.
youth -- from all walks of life -- that are contributing to this work. It’s equally
crucial that we invite more people with lived experience with harm into this
work. These folks, more than anyone, know how the system is failing. They
know what needs to change. We need to learn from them; we need to center
their perspectives and insights. In doing so, we can ensure that they are the
last to suffer.
What is your vision of a better tech future and how can we move towards
that vision?
My vision of a better tech future is centered around an internet that is kind,
affirming and inclusive. It’s an internet where young people are accepted and
celebrated as they are. It’s an internet that young people help to
create/sustain. In that vein, in this future, young people are at the forefront of
the movement to build a just digital experience. Whether it’s in trust and
safety-related work, policy work, or technology innovation, young people
have the agency and support they need to construct the world that they want.
As a young person I recently spoke to said, “Wouldn’t that be awesome?”
Sonia Livingstone
LSE Professor and Director,
Digital Futures for Children
How does your role help tackle thorny tech & society issues?
I lead the Digital Futures for Children (DFC) centre, a joint LSE and 5Rights
centre which facilitates research for a rights-respecting digital world for
children. We support an evidence base for advocacy, facilitate dialogue
between academics and policymakers, and amplify children’s voices, in
accordance with the UN Committee on the Rights of the Child’s authoritative
statement, General Comment No. 25, on how to implement the UNCRC in
relation to the digital environment.
A core ambition of our work is that providers of digital products and services
that impact children in one way or another should find ways to keep children
in mind throughout their work and embed children’s rights into their provision.
We have consulted innovators, practitioners, experts and children, learning
for example of designers’ everyday dilemmas about how to consult children,
meet the needs of different age groups, balance protection and participation,
and know when they have got it right. To find answers for them, we drew on
the collected wisdom of many rights-based, ethical and value-sensitive
organisations and we held a consultation with children.
The result is the DFC’s Child Rights by Design toolkit, which guides providers
of digital products and services impacting children. Its 11 principles are
distilled from the articles of the UN Convention on the Rights of the Child:
1. Equity & Diversity, 2. Best Interests, 3. Consultation, 4. Age Appropriate, 5.
Responsibility, 6. Participation, 7. Privacy, 8. Safety, 9. Wellbeing, 10.
Development, and 11. Agency.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I began as a social psychologist fascinated by questions of agency, voice and
power in our mediated world. My research has always taken a comparative,
critical and contextualised approach, examining how changing conditions of
mediation reshape everyday practices and possibilities for action. For the last
thirty years, I’ve focused on the digital lives of children and young people – at
home, with friends, at school and more broadly. My role as professor at the
London School of Economics and Political Science has supported me in
developing my research, teaching, networking and engagement in many
ways. We created a successful multidisciplinary Department of Media and
Communications. And more recently, with 5Rights Foundation, we’ve created
the Digital Futures for Children (DFC) centre to advance research and
advocacy at the intersection of children, rights, and the digital.
There are several ways of telling the story of my career (so far!), but they
probably all centre on the combination of passion, purpose and hard work.
I’ve been very fortunate in my mentors and collaborators over the years. I do
find it hard to offer advice, because everyone is different, as are the
conditions of their lives. But I might say – figure out what you value, what you
can contribute, who can help you and who you can help. For more about me,
see www.sonialivingstone.net.
In other words, all the needed backgrounds and voices are probably present,
but they may not yet all be comfortably in conversation with each other.
There are lots of ways of asking questions, describing problems, weighing
evidence or defining concepts. It takes a lot of mutual discussion and
challenge to find our way forward.
What is your vision of a better tech future and how can we move towards
that vision?
This is the hardest question to answer. Probably, there are lots of visions of a
better tech future, and the most important way ahead, of course, is to consult
people - I would say, especially children and young people as they have the
greatest stake in the future, and also especially those who are more
marginalised, under-represented, or least heard. Then we must ask: better for
whom? And how would we decide what’s better? Really, the question is not so
much what a better tech future looks like, but what role tech should play in a
better future for all.
Key Terms & Definitions
Some terms and definitions that are important for understanding Public
Interest Technology include the following:
“…collection and its use are connected to broader social and political
concerns, and how data-driven systems can be designed more equitably.”
(Emory University)
Job Titles
Data Scientist
Software Engineer
Product Manager
Product Designer
UX Designer
Digital Designer & Accessibility Analyst
Research Engineer
Deputy Director of Network Services
Innovation Manager
Program Officer for Technology
Technical Director
Project Manager
Statewide Spatial Equity Analyst
United States Digital Service: The United States Digital Service is a technology
unit housed within the Executive Office of the President of the United States. It
provides consultation services to federal agencies on information technology.
It seeks to improve and simplify digital services, and to improve federal
websites.
18F: “18F is a digital services agency within the Technology Transformation
Services department of the General Services Administration of the United
States Government. Their purpose is to deliver digital services and technology
products.”
Afua Bruce
Author, The Tech That Comes Next;
Principal, ANB Advisory Group
“My vision for a better tech future is one where technology
truly centers equity and justice, enabling people to support
their communities, run their businesses, and explore their
interests.”
How does your role help tackle thorny tech & society issues?
In my current role, I support organizations across sectors to develop and sustain
public interest tech projects. Whether working with funders to design and run
various grant and investment programs, or with nonprofits to develop strong
technical and organizational strategies, or with the private sector to build
responsible tech products and resources, I support leaders in tackling tech and
society issues.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I started my career as a software engineer at IBM. After taking a leave of
absence from IBM to get my MBA, I joined the FBI as a Special Advisor and
held several leadership positions in various science and technology strategy
and program management roles. Being at the FBI showed me how
intertwined technology, policy, and society are; I knew I wanted to do more
work at that intersection. My career moves from the FBI, to the White House,
to a think tank, to a nonprofit, and now to leading a consulting firm have been
driven by my desire to influence technology
from various angles. My advice to people
wanting to work in similar positions is to
identify what matters most to you, and be
open to working on those issues in different
forms. Once I realized that interdisciplinary,
collaborative, and inclusive science and
technology in the public interest mattered
to me, I was open to new partnerships and
new places.
diversity of the world. I would also like to hear more voices from a wider
range of science and technology fields. Especially with the growth of
generative AI, responsible tech affects everything from how spacecraft is
designed to how new materials are discovered to how drugs are developed
to how roads are constructed; subject matter experts from all these
disciplines should be included in the responsible tech ecosystem. Finally, we
need more storytellers and artists to help tell what’s possible with more
responsible technology and to help people imagine a better future.
What is your vision of a better tech future and how can we move towards
that vision?
My vision for a better tech future is one where technology truly centers equity
and justice, enabling people to support their communities, run their
businesses, and explore their interests. Realizing that vision requires
collaboration across sectors, including changemakers, technologists, social
impact organizations, and funders, to improve tech design and development
processes. Additionally, I would like to see more work done on sustainable
business models and funding structures for responsible tech work; without
advancements in this area, I worry we will see a number of great short-term
changes and successes, but fewer long-term, large-scale impacts.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I think often about a story from Kurt Vonnegut. He was at the party of a
billionaire with the author Joseph Heller, and remarked that their host had made
more money in a single day than Heller had ever earned from
Catch-22. And Heller responds, “Yes, but I have something he will never have
— enough.” When you work in tech the wealth that exists can be disorienting.
“Enough” is something different to everyone but truly knowing what is
enough allows you to build a career driven by your values. And looking back
on that truly is priceless.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I think our responsible tech conversations often over-emphasize the
technology itself. We need to pay closer attention to the incentive structures
that drive its development.
In our recent SSIR piece, my collaborator Dr. Wilneida Negron and I framed it
this way: We need more models of mission-driven catalytic capital to support
tech companies committed to building a more just, equitable, and sustainable
private sector. We need values-driven capital to become exponentially more
active at the seed and startup stages,
to influence the ethos with which tech
companies are encouraged to grow.
Imagine if this year’s YC crop of 400+
companies was inspired to create value
for public benefit, rather than simply
“build something that people want.”
The actions funders and impact investors
take in the next few years could help to
speed the type of transformation that’s
needed to ensure that the next
generation of Big Tech companies is
grounded in the public interest.
Oumou Ly
Senior Advisor for Technology and Ecosystem
Security, the White House
“My advice to others is to embrace complexity. Be curious,
push boundaries, and invest in relationships that challenge
and support your growth. Understanding how to
communicate across disciplines and find common ground
will make you an indispensable part of any team.”
Tell us about your career journey. How did your career grow, and what
advice would you give to others wanting to be in a similar position?
My first role in this space was in the U.S. Senate, where I was a member of
Leader Schumer's national security team. Part of my role was to advise the
Senator on nominations of senior national security leaders, including the
Chairman of the Joint Chiefs of Staff and the Secretaries of Defense, State,
and Homeland Security. I remember staffing the Senator for a meeting with
Secretary Ash Carter, who talked at length about technology, innovation,
cybersecurity and how they work together to create a holistic approach to
national security. It was one of the first times I had heard a leader so incisively
articulate a policy vision which drew on the interrelation of these traditionally
distinct policy domains. After that, I started to think about how I could build a
non-traditional, multidisciplinary career.
What is your vision of a better tech future and how can we move towards
that vision?
Today, we are making decisions that will determine how technology shapes
the future. My vision for a better tech future is one where innovation isn’t just
about creating efficiencies but about improving lives across the board—
whether through better access to healthcare, education, or economic
opportunity. In this future, technology enables everyone to uplevel their way
of life.
Audrey Tang
Senior Fellow, Project Liberty Institute
How does your role help tackle thorny tech & society issues?
As Taiwan’s first Digital Minister (2016-2024), I worked to bridge the gap
between technology and society. Through Presidential Hackathon, Ideathon,
Join.gov.tw, the Participation Officers network, and Alignment Assemblies,
we ensured that digital advancements enhanced democratic values and civic
engagement, leading Taiwan to top rankings in Asia on Internet Freedom
and Democracy indices. Now, as a Senior Fellow at the Project Liberty
Institute, I am extending this mission globally. My work focuses on shaping
ethical governance models for digital platforms that prioritize transparency,
user agency, and democratic participation.
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
My career journey has been unconventional, driven by a passion for open
collaboration. I began coding at age 8 and left formal education at 14 to fully
engage in internet development. This path gave me a deep appreciation for
decentralized, collaborative efforts. A pivotal moment in my career was my
involvement with g0v (pronounced "gov zero"), a grassroots movement in
Taiwan focused on government transparency. This experience taught me the
importance of bridging the gap between technology and society to create
inclusive and accessible systems. Serving as Taiwan's Digital Minister, I
played a key role in shaping our response to COVID-19 and safeguarding
against cyber interference, grounded in civic participation and co-creation.
My advice: Stay curious and remain committed to the public interest. Don't
shy away from unconventional paths, as they often lead to the most impactful
innovations. Cultivate empathy and strive to
understand diverse perspectives. Always
remember that technology should empower
and bridge across communities, not divide
them.
What is your vision of a better tech future and how can we move towards
that vision?
My vision of a better tech future centers on "pro-social media," a
transformative approach that prioritizes bridging divides over amplifying
conflicts. Unlike current social media, which often exploits emotional
reactions for engagement, pro-social media would use algorithms that
promote constructive dialogue and highlight common ground, even among
differing viewpoints.
The Lawfare Institute / Brookings. Cyberlaw Podcast: The Lawfare Podcast
features discussions with experts, policymakers, and opinion leaders at the
nexus of national security, law, and policy, including cybersecurity and
governance.
NYU’s Center for Social Media and Politics. CSMaP Newsletter: Learn about
their latest work to strengthen democracy by conducting rigorous research,
advancing evidence-based public policy, and training the next generation of
scholars.
Stéphane Duguin
CEO, CyberPeace Institute
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I worked for 25 years in law enforcement and 5 years in civil society. My
journey has been investigating organised crime - especially cybercrime and the
terrorist use of tech - and engaging the digital policy and tech community to do
so at scale. I grew from investigator to senior manager, and I led the creation of
innovative operational capabilities to address new threats: the European
Cybercrime Centre, the EU Internet Referral Unit, the European Innovation
Lab, and now the CyberPeace Institute.
Advice: Never underestimate the value of fieldwork, and how it can help your
journey towards systemic change. It is one thing to theorise about cybercrime;
it is something else to investigate, arrest, interview, and bring a cybercriminal
to justice. It is one thing to discuss victims; it is something else to be called at
2 o’clock in the morning by someone who lost everything because of a
cyberattack.
What is your vision of a better tech future and how can we move towards
that vision?
A tech future which will not be about tech for the sake of tech, but about how
it supports an ambitious human agenda.
Tech Policy
Tech Policy
Job Titles
Director, US AI Governance
Tech Policy Manager
Senior Counsel, Intellectual Property and Technology
Associate General Counsel, Privacy Data Governance
Policy Researcher
Senior Policy Adviser, Data
Policy Analyst: AI and Emerging Technology
Policy Manager - AI Governance
Policy Enforcement Manager, Trust and Safety
Product Policy Specialist
AI Policy Director - Government Relations
Office of the Secretary-General’s Envoy on Technology: “The Office of the UN
Secretary-General’s Envoy on Technology is dedicated to advancing the
United Nations’ digital cooperation agenda, ensuring that technological
advancements benefit all of humanity while mitigating associated risks. The
office serves as a bridge between the UN system, member states, the tech
community, and other stakeholders, promoting partnerships and harnessing
technology for the Sustainable Development Goals (SDGs).”
See a full list of organizations: https://alltechishuman.org/tech-policy.
All Tech Is Human & Consulate General of Canada in New York. Responsible
Tech Summit: Improving Digital Space: Held on May 20, 2023, All Tech Is
Human collaborated with the Consul General of Canada in New York to bring
together 120 leaders across civil society, government, industry, and academia
who are working toward improving the health and vibrancy of digital spaces.
These are the people and organizations focused on reducing harms, expanding
education, and reimagining what an ideal digital space aligned with the human
experience looks like.
All Tech Is Human & Consulate General of Canada in New York. Responsible
Tech Summit: Shaping Our Digital Future: “On Sept. 14, 2023, at SVA Theatre,
All Tech Is Human, in collaboration with the Consulate General of Canada New
York, held its annual Responsible Tech Summit, which brought together over
280 stakeholders from across civil society, academia, industry, and
government to discuss how to co-create a better tech future.”
Eugenio V Garcia
Tech Diplomat, Ministry of Foreign Affairs of Brazil
How does your role help tackle thorny tech & society issues?
As a Tech Diplomat, my role is to build bridges to connect people and
promote Tech for Good. Technology should be beneficial for everyone,
What backgrounds or voices would you like to see more of in the Responsible
Tech ecosystem?
Global tech policymaking demands responsible strategies to prevent disturbing
scenarios, build commonly accepted rules and minimum standards, and foster
international cooperation to avoid strategic uncertainty. Predictability by means of
norm-setting is in everyone’s interest. Effective global governance means that
What is your vision of a better tech future and how can we move towards
that vision?
In my view, Tech Diplomats from developing countries can play an active role
in a number of areas: pushing for international cooperation; engaging in
global policymaking; supporting efforts to ensure responsible use of new
technologies; exchanging views and coordinating positions; promoting a
common vision for the future; or joining forces with other partners in cross-
region institutions.
June Okal
Senior Manager, Global Stakeholder
Engagement, Eastern and South Africa, ICANN
How did your career grow, and what advice would you give to others
wanting to be in a similar position?
I always wanted to have across-the-board experience in the tech industry in
order to appreciate the unique challenges and nuances of each sector. For
instance, I started work in tech policy at the equivalent of an ICT Department
for the Government of Kenya, supporting the setting up of ICT Standards to be
adopted. I then worked at the copyright regulator, where we ratified rules on
copyright exemptions for the visually challenged and introduced the first
Kenyan legal instrument on intermediary liability. From there, I engaged at
KICTANet, a leading think tank that seeks to catalyze ICT reforms through a
multistakeholder approach; offered legal and regional client advisory at a
boutique and specialist TMT law firm; set up the office and helped protect
clients’ intellectual property for one of the largest IP firms globally; shifted to
Google, interfacing with a user base of 1B+; then to a passive infrastructure
telecommunications provider; then to ARTICLE 19, advocating for freedom of
expression and information with a focus on the Domain Name System; to
Meta Connectivity, advocating for high-speed connectivity for Africa’s people;
and most recently to Harvard’s Berkman Klein Center for Internet and Society.
The transition at each point has been exciting, some more seamless than
others, but because the foundation is the same, it is simpler to build upon it.
What is your vision of a better tech future and how can we move towards
that vision?
A more level playing field across the global stage, using tech as an enabler
with tangible socio-economic impact. Collaboration, collaboration,
collaboration. A friend, Cecil Yongo, recently tweeted (X’ed?): ‘Privileged
Africans (like me) have a moral duty to do everything in their power to open
up opportunities for the brightest young Africans.’ This goes beyond just
Africans; we need to create and welcome new voices in the space and push
back against the gatekeeping norms that have become rooted in the
ecosystem. I deeply believe in abundance, and if the numbers on meaningful
participation in tech are anything to go by, there is still so much room for
others to come in and take up space. "If you want to go fast, go alone. If you
want to go far, go together" - African Proverb.
Johanna Weaver
Founding Director, Tech Policy Design Centre
“My advice to those wanting to enter the field? Don't be
scared of tech; it's everywhere and it's already part of your
job - whether you know it or not. It’s no longer possible to
be a credible policy practitioner if you don't have tech policy
literacy.”
How does your role help tackle thorny tech & society issues?
Our mission is to shape technology for the benefit of humanity. We believe
technology shapes society, but we must remember that people have the
power to shape technology. This includes through its technical design, but
also through the laws, policies, and standards that govern the development
and use of tech. By harnessing each of these, we can shape a future where
humans, the environment, and technology thrive.
What is your vision of a better tech future and how can we move towards
that vision?
My vision is a future in which technology makes our lives better, not just
easier. It is possible to shape a future in which people, the environment and
technology thrive. But we can't be passive; we need to take action today to
create this positive future tomorrow.
Moving Forward
Co-creating a Better Future
Over the next five years, All Tech Is Human will focus
intently on ensuring our tech future is designed collectively.
This realignment requires a paradigm shift in how society
designs, develops, and deploys technologies that deeply
impact people and our social fabric. Your voice is a
necessary part of this goal. This will allow for a better
approach to tackling thorny tech & society issues.
Where to start?
It can be overwhelming to understand what the Responsible Tech
ecosystem looks like and determine how to get more involved. Individuals
often get inspired by a book, a movie, an article, or a personal experience
as a catalyst to making a positive difference in our tech future; however,
they may struggle to understand how. The Responsible Tech Guide that
you are reading is designed to help guide you on the many ways that you
can grow in the Responsible Tech ecosystem.
Finding Community
It can oftentimes feel like a solitary pursuit for individuals who want to make
a difference, so it is essential to illuminate pathways into the ecosystem.
There are thousands of people and organizations committed to the
Responsible Tech movement. The Responsible Tech Guide shows you
numerous ways that you can build greater community through All Tech Is
Human’s activities, along with the hundreds of other incredible
organizations in the ecosystem.
[Graphic: All Tech Is Human’s three pillars - community, education, career]
“…to consider its impact, we do not have an appropriate mix of backgrounds
involved in the process, and we do not have enough knowledge…”
The top areas of expertise, which are diverse and overlapping, include
Responsible AI, Tech & Democracy, and Tech Policy.
Read more about All Tech Is Human’s vision to maintain a space that
depicts the diverse and global voices in Responsible Tech. Download the
Ecosystem Pulse Report here.
In our Slack community, you will find channels to discuss each of our 6 focus
areas, location-specific channels for hubs around the world, and career-
focused channels to post job openings and share resources.
They are researchers, builders, policymakers, industry professionals,
organizers, and educators sharing and executing our vision of building a
better future for all.
Civil Society
Ada Lovelace Institute, Accountable Tech, Alan Turing Institute, Algorithmic
Justice League, Berkman Klein Center, Center for Democracy & Technology,
Center for Humane Technology, Consumer Reports, Integrity Institute,
Encode Justice, Project Liberty, Thorn, World Economic Forum, World Wide
Web Foundation, and more.
Industry
Amazon, Discord, EY, Google, Hinge, IBM, Meta, Microsoft, OpenAI, Oracle,
Pinterest, Reddit, Spotify, TikTok, Vimeo, Yahoo, and more.
Academia
Columbia University, Cornell Tech, Georgetown, Harvard University, New
York University, Oxford University, Princeton University, Stanford University,
and more.
Journalists
Axios, Bloomberg, Fast Company, MIT Technology Review, New York Times,
Rest of World, Tech Policy Press, Washington Post, WIRED, and more.
Working Groups
Ali Feldhausen
Director of Career Development, All Tech Is Human
Talent Matchmaking
Mentorship Program
To date, nearly 1,000 mentees have gone through the mentorship program
since its inception. Join the waitlist for our 2025 Mentorship Program.
Learn More, Take Action
After you read the Responsible Tech Guide, here are ways
you can stay involved:
06 Join one of our seven open working groups, focused on our six key
areas of Responsible Tech, plus a general education group.
08 Are you growing your career in Responsible Tech? Join our talent
pool and use our Responsible Tech Job Board, along with our
career resources and services.
Acknowledgments
Thank You
If we thanked everyone who has contributed in some way to
the Responsible Tech Guide, we’d have to add an entirely
new report. The Responsible Tech Guide has been informed
by our current working groups (over 500 people), previous
reports that have involved over 1,000 people in their
creation, our mentorship program with over 1,500
participants, and the 10,000+ members in our Slack
community who are regularly sharing resources, insights,
and feedback that we synthesize into the guide.
We would also like to thank and acknowledge the support we receive from
Siegel Family Endowment, Patrick J. McGovern Foundation, the Future of
Online Trust & Safety Fund, Mozilla, Schmidt Futures, and Oak Foundation.
We have also previously received support from the Ford Foundation and
Project Liberty.
Our Responsible Tech Guide is an evolving process that builds off of previous
versions. The 2024 Responsible Tech Guide is the fifth edition, as the first
guide was released in September 2020. In that time, we’ve activated
hundreds of volunteers who have helped shape the version you’re reading
today.
All Tech Is Human has been an organization with big ambitions to develop a
better approach to tackling thorny tech & society issues. Thank you to
everyone who has believed in our mission and helped spread the word.
Your participation in our various activities helps create a stronger hub of
knowledge to the benefit of the entire community. Thank you.
Learn more about how we are tackling complex tech & society issues on
our website, or email us at hello@alltechishuman.org. You can also
submit press & media inquiries and download our press kit.