On Monday, the 3rd of September 2018, having just finished the #TeensInAI2018 Accelerator, the group was invited to the BBC's New Broadcasting House at Oxford Circus for the BBC conference, 'AI, Society and the Media - How Can We Flourish?'.
Ali Shah, the BBC’s Head of Emerging Technology and Strategic Direction, began the day with an introduction and overview of the agenda.
Matthew Postgate, the BBC’s Chief Technology and Product Officer, then opened with a talk on “how should data be used?”, explaining that today people feel neither responsible for nor in control of their own data, and that this needs to be addressed. He highlighted that there is currently no collective, national conversation about how data can be used for societal benefit while at the same time respecting privacy. Data is still held within large companies, where its processing is most easily done, but this leaves people feeling disempowered. In conclusion, Matthew believes that the direction of technology is moving towards the good of society, where everyone can flourish.
Dr. Adrian Weller, a Senior Research Fellow in machine learning at the University of Cambridge and Programme Director for AI at the Alan Turing Institute, discussed deep learning and its limitations, and shared real-world examples. In his eyes, we need three things in order to flourish as a society: algorithms that won’t discriminate, protection of privacy, and transparency. He added that, as we strive to make AI better, the effort will cause us to reflect on ourselves and our own weaknesses.
Been Kim, a Research Scientist at Google Brain, then spoke about “Interpretable Machine Learning”, explaining how machine learning can learn from data as a dog learns tricks. A neural network is “many, many numbers doing many, many mathematical calculations”. Despite this complexity and computing power, neural networks need to work in partnership with humans to leverage their respective skills and knowledge. “To use machine learning responsibly, we want to ensure that: our values are aligned, and our knowledge is reflected”. In conclusion, how can we improve our ability to interpret these systems? By understanding the data (gaining better insights), and by building an inherently interpretable machine model (an interpreter that can speak both languages).
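As a rough illustration of Been Kim's description (this sketch is mine, not code from the talk), even a tiny two-layer neural network is literally just weights and arithmetic, which is exactly why its reasoning is hard to read off directly:

```python
def relu(x):
    # A common non-linearity: pass positive values through, clip negatives to 0.
    return max(0.0, x)

def forward(inputs, w_hidden, w_out):
    """One forward pass: 'many numbers doing many mathematical calculations'."""
    # Hidden layer: weighted sums of the inputs, passed through the non-linearity.
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    # Output: another weighted sum -- just more numbers and arithmetic.
    return sum(w * h for w, h in zip(w_out, hidden))

# Hypothetical hand-picked weights, purely for illustration.
w_hidden = [[0.5, -0.2],
            [0.1, 0.9]]
w_out = [1.0, -0.5]

print(forward([1.0, 2.0], w_hidden, w_out))  # prints -0.85
```

Nothing in those numbers explains *why* the network produced -0.85 in human terms; interpretability research asks how to bridge that gap.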
Speaker, AI Researcher and Author Jonnie Penn then challenged the audience to “not only [ask ourselves] ‘how will AI change society?’, but also ‘how will society change AI?’”.
Kate Coughlan, the BBC’s Head of Audience Planning, spoke about “how are people feeling about AI?”, saying that in a BBC survey 85% of the public had heard of AI, but only 10% felt able to influence its development. Kate also spoke about the public feeling disempowered and passive; passivity creates negative outcomes, while control and autonomy are core human needs. Kate finished by quoting an anonymous woman from one of the surveys: “even the smallest voice in the wilderness can have some impact”.
Mary Hockaday, the BBC’s Controller of World Service English, discussed how AI can unfortunately enable fake audio and video (“deepfakes”), and can create fake viral campaigns, such as the fake Starbucks campaign about African-Americans receiving free drinks. Mary also expressed concerns about AI perpetuating social prejudices: “technology is only as good as the people who use it”. She also spoke about the real-world value of AI and Big Data, giving the example of using the weather forecast to prevent the spread of cholera in Yemen, and finished by warning that “the more the phrase ‘fake news’ is used, the less confidence people have in journalism”, and that “the trust between us must not be eroded”.
Ryan Fox, COO of New Knowledge, provided further perspective and context on fake news, explaining four categories: 1) ‘Misinformation’: false information spread without necessarily malicious intent; 2) ‘Disinformation’: using social media to amplify falsehoods and make them seem to come from a credible source; 3) ‘Weaponised Truth’: lies built on things that are mostly true; and 4) ‘Everyday Manipulation’: computational information warfare, such as faking social engagement by manufacturing a crowd to try and influence action.
After lunch, there was a talk by Cassian Harrison, Channel Editor for BBC Four. He discussed his new AI programme (BBC 4.1), which draws on scene detection, subtitle analysis, visual energy and machine learning.
Dihal and Cave discussed the uncanny essence of AI, which falls into the liminal space we create between living and non-living. Made from inanimate parts yet possessing ‘living’ qualities such as the ability to speak, AI makes us feel uncertain, as if we are being deceived. They also spoke about AI having the ability to create a utopia, or equally a dystopia. AI has the power to impact: Life, Time, Desire and Power. ‘Life’ means either medical immortality, or losing our humanity while trying to achieve it. ‘Time’ could free us from the confines of work, but also brings the fear of obsolescence. Similarly with ‘Desire’: ultimate pleasure, or monstrosity invading our homes; and with ‘Power’: enforcing things on others, or the fear of being dominated. Each utopia has the power to collapse into its own parallel dystopia.
Professor Mary Aiken, Cyberpsychologist, told us that 16% of 3–4 year olds have their own tablet device. She discussed ‘dark Peppa Pig’, a recent epidemic in which young children accidentally watched very disturbing fake versions of ‘Peppa Pig’, deliberately planted on YouTube. Mary also noted that, in her words, AI in itself is neither good nor bad; it is used for either good or bad reasons by humans. “We’re exposing ourselves to weapons of mass destruction”, and we’re “walking into it with our eyes closed”. For instance, someone has created ‘Norman’, the world’s first psychopathic AI, seemingly blind to the possible consequences.
Originally published at Acorn Aspirations.