Gendered Innovations in Machine Learning and Robotics, by Laila Arain (17) Teens in AI
On Tuesday, February 19th, I was lucky enough to attend a lecture by Professor Londa Schiebinger on “Gendered Innovations in Machine Learning and Robotics” at Cambridge University. The talk was fascinating and highly thought-provoking, giving an insight into the efforts being made by academics and researchers all over the world to solve the problem of bias in technology.
Professor Schiebinger teaches the History of Science at Stanford University in California and has written many books on feminism and gender, with a specific focus on their effects on technology. Echoing the classic Silicon Valley tagline, “We’ll fix it”, Professor Schiebinger began her speech with three things which must be fixed in the tech industry for the problem of gender bias to be solved: fix the women, fix the institutions, and fix the knowledge. If these three fixes were made, the world of engineering and computer science could change drastically. She went on to discuss real case studies where gender has come into play, such as deciding which advertisements to show consumers, or how a verb should be translated into English from a gender-neutral language. Professor Schiebinger also explored some solutions to the problems faced in robotics. She gave examples of robots such as “Valkyrie” — the NASA robot which is sent on missions but whose mission is also to challenge gender stereotypes — and “Pepper” — the Japanese robot whose anatomy and voice were called into question because of their gendered nuances. Professor Schiebinger concluded that the only way advancements can be made in this rapidly changing world is for interdisciplinary teams to collaborate to produce high-performing but meaningful results.
Shortly after, Professor Gina Neff of Oxford University gave a response which further fortified Professor Schiebinger’s conclusions, but also offered some insightful provocations. Professor Neff noted the growing use of everyday devices, but contrasted it with the widening gulf between the people who design and deploy technology and those who use it. This comment was particularly striking, as the user should always be kept in mind when designing a product — whether it be the humble Chrome extension or new portable technology. Professor Neff’s main line of argument was that to fix this gender issue we must come together and share knowledge about our own cultures, genders, and ethnicities, in order to diversify the data set. As with the case study of IBM’s Watson for Oncology, if one feeds an artificial intelligence data about common diagnoses in Manhattan’s Upper East Side, the range of illnesses may not be as diverse as if the data were collected across a whole country. Knowledge, therefore, is the key to solving these gender problems.
Overall, the lecture was interesting and answered many questions about the problems of assigning gender to technology. The take-home message is that technology should not be feared. When fed the right data, it can only enhance human life, and it is very exciting to see what the future holds.