Today’s most popular form of artificial intelligence (AI) – machine learning – is often called a black box because humans can’t see inside to understand its decisions. Depending on the outcome of a new hearing before the national communications regulator, Canadians might be shut out even more, left in the dark because of proposed confidentiality rules. The outcome will test whether new AI systems can be held accountable to democratic oversight via the traditional instruments of public law.
The Canadian Radio-television and Telecommunications Commission (CRTC) is a quasi-judicial administrative agency that relies on open public proceedings to develop a public evidentiary record on which it ultimately bases its decisions. The record is developed pursuant to a notice of consultation, and CRTC decisions can be appealed to the Federal Court of Appeal (FCA) and/or the Governor in Council (GiC), depending on the basis of the appeal (i.e. error in law versus policy, respectively).
Last summer, Bell Canada requested approval to use a confidential machine learning system to filter fraudulent and scam telephone calls. The CRTC approved a trial of the system, and last month Bell requested permission to put the system in place permanently. We originally participated because we wanted to develop better public oversight of AI. Because of Bell’s unusual level of secrecy and its failure to respond to our requests for information on the public record, we have become increasingly concerned about the privacy and security implications of Bell’s system for Canadians.
Bell Canada’s application has been heavily redacted, providing little information about the type of information it is harvesting from Canadians to train its AI system and classify calls. Such secrecy, Bell argues, is needed to avoid tipping off “bad actors”. We object. This sets a worrisome precedent for the first known use of AI in Canada’s regulated communication systems. AI is hard enough to explain without the added secrecy.
Now, instead of an open public debate about the benefits and risks of Bell’s system, the CRTC hearing has become a debate about the future of public evidence and public participation in public regulatory proceedings. Bell has presented interveners with what it calls a “compromise”, whereby “legitimate” interveners have the option to sign a non-disclosure agreement (NDA) with Bell if they want to view secret evidence it has provided to the CRTC.
With so little information about basic elements of the system on the public record, interveners who decline the NDA on principle will have no idea how the system works, what types of information it harvests, its potential to block legitimate calls, or the risks it poses to the privacy of Canadians. For us, the catch is that signing Bell’s NDA would limit the publicity of a public proceeding. If we signed the NDA to view evidence Bell has presented to the CRTC, we could not discuss potential risks on the public record or inform the public as we do here. From a legal process perspective, it is not clear how decisions based on secret information in the first instance by the CRTC could later be appealed in open court and/or via the open Governor in Council (GiC) appeal mechanisms Parliament has established to ensure accountability of regulatory decisions.
We recently filed our opposition to the NDA system Bell has proposed. The CRTC has so far agreed with Bell’s approach, but we hope it changes its mind. We think our intervention helps reveal the fundamental issue at stake in this proceeding. It explains how the use of NDAs in public proceedings is inconsistent with Parliament’s basic objectives in requiring open public proceedings and evidence-based administrative decision making. If AI systems can only be discussed in confidence, then they are out of step with the principles of democratic governance and will become increasingly unaccountable to the collective wishes of humans as reflected in our public laws and the institutions that enforce them.