What is RightsCon?
RightsCon is an annual event organized by Access Now. It gathers business leaders, technologists, activists, human rights experts, and government representatives to discuss challenging issues at the intersection of human rights and technology. I had the opportunity to attend this year's edition in Toronto, where many sessions on AI were organized. I participated in a panel about bias in machine learning. This panel also featured Steve Crown, VP & Deputy General Counsel, Microsoft Corporation; Tara Denham, Director, Democracy Unit, Office of Human Rights, Freedoms and Inclusion, Global Affairs Canada, Government of Canada; and Sherif Elsayed-Ali, Director of Global Issues and Research, Amnesty International.
In the following, I summarize some of the interesting sessions I attended.
John C. Havens on fostering trust and empathy in the age of AI
John C. Havens is currently leading the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He spent almost his entire presentation talking about GDP, a metric invented in the 1940s that is still commonly used to measure a country's development and wealth. He talked about how GDP fails to capture key aspects such as caregiving, which essentially makes a lot of women invisible to the metric. He also talked about Bhutan's famous gross national happiness index and referenced Bobby Kennedy's criticism of GDP's blind spots. This led him to his main point: how can we teach AI our values if we don't even know what they are? On the same topic, I also recommend reading this nice piece by Henry Kissinger in The Atlantic, where he writes:
If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?
Ally Skills Workshop
To my surprise, there were many sessions about diversity in tech. The few that I had the chance to attend were great, and this workshop was one of them. It was hosted by Leigh Honeywell, CEO of Tall Poppy. The slides and other workshop materials are freely available here under the section ‘Materials’. This workshop was fantastic and I learnt a lot. I encourage you to look at the slides and the handout: they give very useful tips on appropriate vocabulary, how to react in certain situations, and more.
Fireside Chat with Reddit CEO Steve Huffman and GC Melissa Tidwell
This chat was mostly about content moderation on Reddit, and it was very interesting. They first talked about the challenges of defining moderation rules. One thing that seemed to work well was adopting fairly vague content policies, which gave them room to interpret and enforce them case by case. When they tried to be more specific, some users would constantly poke at the line they drew to try and find loopholes, which was exhausting. They also talked about the importance of designing policies with enforcement at scale in mind, since they work with an army of moderators, and of weighing each decision carefully so as not to set dangerous precedents. Steve Huffman is regularly contacted by Reddit users about content matters, and he seems to take their feedback into account a lot; it influences quite a few of the decisions they make regarding content.
Regarding hate speech, their approach is to separate belief from behavior, meaning that they moderate when people incite violence or hatred, but they won’t necessarily try to hide people’s racist opinions. Their motivation is to show the world as it is, and they believe that banning people or communities would just make things worse. Huffman gave the example of Charlottesville, VA, and said that his first reflex after seeing what happened was to ban the entire community from Reddit, but he decided against it for the same reason: they think it is important to show the world as it is. There was a poignant intervention from someone in the audience who said that this wasn’t good enough; she referenced the recent Toronto tragedy, in which someone active within the ‘Incels’ community murdered women of her age and ethnicity who went to the same university as she did. Huffman answered only that they ban people once they exhibit violent or violence-inciting behavior, but that they can’t do much before that line is crossed.
Mobilising the Might of Rights: A Human Rights Based Approach to AI / A Canadian position on AI and Human Rights: towards policies that promote and protect human rights
During these sessions, a lot was said about Canada’s position on AI and human rights. The Canadian government has been running a consultation on the strategy the country should adopt toward AI policy. A few points that the government is actively thinking about are:
We are often not very good at anticipating the effects of new technologies, so what are our blind spots regarding AI? What can we learn from previous experience with technologies that “disrupted” human rights?
We need a balance between economic prerogatives and protection, meaning that the government is seeking a tradeoff between policy-making and letting AI businesses flourish without too many obstacles. One of the key components here is data and privacy. The government understands the need for data in AI but also wants to protect its citizens.
Ownership of data: we mostly talk about ownership at the individual level, but what about groups? Shouldn’t groups have a say in how data concerning them is used, and not only their personal data, since group-based decisions will affect them down the line?
Transparency: what degree of transparency over algorithms and models can we expect?
Freedom of expression: what to do when freedom of expression turns into a weapon to marginalize certain groups?
Equality/bias: can we set best practices to deal with these issues?
(En)countering hate speech: Let's diversify our solutions
This session was about methods to counter hate speech on social media and I decided to attend it for my own education. Some interesting initiatives:
“Mirrors of racism”: this campaign was run in Brazil. The idea was to display racist tweets on giant billboards located in the neighborhood of the person who posted them. This way, instead of having to deal with an ‘online mob’, which is often ineffective, the person would have to face their neighbors, friends, and family, and be confronted with what they said. This campaign proved effective in many cases, with people ending up apologizing for their tweets.
“Seriously?”: this is a French platform developed to help people, especially young people, deal with hateful speech on social media. It provides facts to counter opinions, expert advice, and a community that can help come up with the best response.
One strategy that proved very powerful on Twitter was to use images. I can’t remember which organization did this, unfortunately, but they countered hateful hashtags or tweets by turning them into something positive and adding an image. One example involved a hateful tweet against lesbian couples (I can’t remember what it said): the sentence was rewritten as a positive statement and paired with an image of two women holding hands and smiling, which made a great counter-tweet. Reminding people that they are talking about real people who have names, faces, and histories helps de-anonymize the conversation and counter hate.
RightsCon was a great conference gathering activists, governments, and companies. Unfortunately, few technical people attended, and during my panel, policy-makers expressed an interest in and a need for venues where they could meet more researchers and companies to discuss these issues and better understand the underlying difficulties of training AI systems.