Pandemic is showing us we need safe and ethical AI more than ever

Machine-learning models are trained on human behavior and excel at highlighting predictable or normal behaviors and patterns. However, the sudden onset of a global pandemic triggered a massive change in human behavior that, by some accounts, has sent automation into a tailspin, exposing fragilities in integrated systems we have come to rely upon.

The realization of the scale and scope of these vulnerabilities, which affect operations ranging from inventory management to global supply chain logistics, comes at a time when we need artificial intelligence (AI) more than ever. For example, AI technologies are enabling contact tracing applications that may help mitigate the spread of the coronavirus. And amid widespread testing shortages, hospitals have started to use AI technologies to help diagnose COVID-19 patients.

Still, the expansion of AI in healthcare could at the same time pose profound threats to privacy and civil liberties, among other concerns. Even when AI systems are relatively accurate, their implementation in complex social contexts can cause unintentional and unexpected problems, for example resulting in over-testing, which is inconvenient for patients and burdensome for resource-strapped healthcare facilities. The challenges associated with developing and implementing AI technologies responsibly call for the adoption of a suite of practices, mechanisms, and policies from the outset.

A new report from the UC Berkeley Center for Long-Term Cybersecurity provides a timely overview of some of the approaches currently being used to roll out AI technologies responsibly. These range from monitoring and documentation techniques to standards and organizational structures that can be utilized at different stages of the AI development pipeline. The report includes three case studies that can serve as a guide for other AI stakeholders (whether companies, research labs, or national governments) facing decisions about how to facilitate responsible AI innovation during uncertain times.

The first case study explores Microsoft's AI, Ethics and Effects in Engineering and Research (AETHER) Committee and highlights what it takes to integrate AI principles into a major technology company. It is well known that Google's attempt to establish an AI ethics board dissolved within a week; however, the AETHER Committee, originally launched in 2018, has comparatively flown under the radar despite some notable successes. AETHER established a mechanism within Microsoft that facilitates structured review of controversial AI use cases, providing a pathway for executives and employees to flag concerns, develop recommendations, and create new company-wide policies.

For example, AETHER's deliberations helped inform Microsoft's decision to reject a request from a California sheriff's department to install facial recognition technology in officers' cars and body cameras. In another example, AETHER's Bias and Fairness working group helped develop an AI ethics checklist for engineers to use throughout the product development process. Other AETHER working groups have developed tools to help AI developers conduct threat modeling and improve the explainability of black-box systems. An internal phone line called Ask AETHER enables any employee to flag an issue for consideration by the Committee.

The second case study explored in the CLTC report delves into OpenAI's experiment with the staged release of its AI language model, GPT-2, which can generate paragraphs of synthetic text on any topic. Rather than release the full model all at once, the research lab used a staged release, publishing progressively larger models over a nine-month period and using the time between stages to explore potential societal and policy implications.

OpenAI's decision to release GPT-2 in stages was controversial in a field known for openness, but the company argued that slowing down the release of such a powerful, omni-use technology would help identify potential dangers in advance. The research lab's decision jump-started a larger conversation about best practices and responsible publication norms, and other companies have since opted for more cautious and thoughtful release strategies.

Finally, the third case study discusses the role of the new OECD AI Policy Observatory, formally launched in February 2020 to serve as a platform to share and shape public policies for responsible, trustworthy and beneficial AI. In May 2019, the Organisation for Economic Co-operation and Development (OECD) achieved the notable feat of adopting the first intergovernmental standard on AI with the support of over 40 countries. Subsequent endorsements by the G20 and other partner countries have expanded the scope of the OECD AI Principles to much of the world. Launched this year, the Observatory is working to anchor the principles in evidence-based policy analysis and implementation recommendations while facilitating meaningful international coordination on the development and use of AI.

Together, the three case studies shine a light on what AI stakeholders are doing to move beyond declarations of AI principles to real-world, structural change. They demonstrate actions that depart from the status quo by altering business practices, research norms, and policy frameworks. At a time of global economic upheaval, such deliberate efforts could not be more critical.

Demand for AI technologies, whether for pandemic response and recovery or countless other uses, is unlikely to diminish, but open dialogue about how to use AI safely and ethically will help us avoid the trap of adopting technological solutions that cause more problems than they solve.

Jessica Cussins Newman is a research fellow at the UC Berkeley Center for Long-Term Cybersecurity, where she focuses on digital governance and the security implications of artificial intelligence. She is also an AI policy specialist with the Future of Life Institute and a research adviser with The Future Society. She has previously studied at Harvard University's Belfer Center, and has held research positions with Harvard's Program on Science, Technology & Society, the Institute for the Future, and the Center for Genetics and Society. She holds degrees from the Harvard Kennedy School and the University of California, Berkeley. Follow her on Twitter @JessicaH_Newman.
