Make a submission: Published response
Make a general comment
I had the benefit of collating and contributing to an earlier submission of 26 July by Dr Cebulla, myself, Dr Johnsam, Professor Leiman and Dr Scheibner, and I adopt its comments, in particular its framing comments.
Should different approaches apply to public and private sector use of AI technologies? If so, how should the approaches differ?
As a matter of principle, it is hard to conceive why such a distinction should be drawn. Those sectors do not exist in isolation from each other and are often engaged in collaborative or contractual relationships, so disparate regulatory coverage could become highly complex or be gamed, to no useful effect. Perhaps government can attempt to demonstrate best practice in its own approach; that is about as far as I would take it.
In what circumstances are generic solutions to the risks of AI most valuable? And in what circumstances are technology-specific solutions better? Please provide some examples.
Given the thesis I have adopted from the outset, that the real risks are not technology risks per se, my position is that generic approaches to these fundamental underlying issues are the most important to address. This is not straightforward, as these issues are often handled by large pre-existing regulatory systems with significant entrenched positions and inertia around how they conceive of and regulate those matters. That necessarily entails a careful, rather than rapid, process of considering the potential need for change. Again, as indicated above, while I believe it is these fundamentally human issues that are important, the impact of technology on those issues and on socio-technical behaviour must be acknowledged; change may be required where shifts in the affordances of our tools induce or permit behavioural shifts. Conceptually, such approaches are more likely to be embedded in legislation, or in flexible equitable and human rights perspectives applied at the level of decision makers and courts.
There may be some areas where technology-specific solutions are better, but from the frame of analysis I have suggested these are inherently less interesting. They will need to sit within hard/soft regulatory mechanisms that are lower order and amenable to faster change, as the technology-specific issues shift with changes in the technology itself. This suggests change at the level of industry codes, co-regulation or hard regulation (as opposed to legislation per se).
Do you support a risk-based approach for addressing potential AI risks? If not, is there a better approach?
A risk-based approach has a lot of superficial appeal, and we can see international examples moving down this path. However, the reality is more complex, in that risk ratings occur in a context, and if that context shifts, the same example that is initially rated as low risk may become high risk, or vice versa. Values and value-based analysis are also varied and fluid depending on the context.
Of course there are a variety of other factors to consider here, including the classic Collingridge dilemma: the timing of any intervention, the degree of uncertainty in outcomes, and the dangers of operational entrenchment versus over-regulation.
One might also question more closely the segmentation attempted in the discussion paper into economy-wide versus application-specific approaches, as everything is interconnected.
To consider a specific example, look at a low-risk classification for email filtering tools. At first blush this might seem fine, but subtle bias could lock out exchanges and manipulate communication with potentially significant effects, so that risk rating is open to question. Consider current complaints about ‘shadow blocking’ on Twitter/X.
Generally, people are not great at risk assessments, and we tend to exhibit both over- and under-reaction. Naturally this is itself a generalisation. It is important, though, that we are not overly distracted by the bright bouncing ball of the next tech challenge coming over the horizon, to the neglect of basic human issues, policy reform, and more obvious immediate existential threats such as climate change.
Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
It makes business and operational sense for organisations adopting AI to do so in a responsible way, to manage their impact on others and avoid the obvious downside risks that may flow from irresponsible or reckless implementation. This applies even without specific AI-focussed regulation, as there are many laws that may be violated by poor implementation, and even where laws are not broken there may be a significant loss of social licence to operate that will affect viability and reputation.
To the extent that mandatory regulation is adopted (whether because pre-existing laws require modification to better capture AI-related risks or because targeted new laws are introduced), it makes little sense to segment its application as between public and private organisations. Why? Because those sectors do not exist in isolation from each other and are often engaged in collaborative or contractual relationships, so disparate regulatory coverage could become highly complex or be gamed, to no useful effect. Indeed, as a matter of principle, it is hard to conceive why such a distinction should be drawn anyway.
As to the question of possible discrimination as between developers and deployers, again that is a hard bifurcation to justify. Organisations might be both. Developers might prefer not to be regulated and argue that they are not in control of the context of deployment, and therefore should have some safe harbour to develop generic tools that may be of benefit in some applications notwithstanding their potential for problematic deployment elsewhere. On the other hand, many key elements, if not all, will necessarily be baked in at the design and development stage, and therefore a parallel argument might be raised by deployers. It is best to have everyone who has been involved potentially in the frame.
However, there might of course be nuances in coverage and responsibility on the facts of particular cases, either through the application of general legal principles for tortious or other liability, or through the application of specific legislation. On the topic of specific legislation and liability for design and deployment, it might be interesting to consider a parallel form of the Design and Distribution Obligations that now apply to financial services products under the Corporations Act.
Still, a major issue affecting any more innovative ideas around regulatory frames or design is whether Australia is in a position to take such a path if it is at odds with international approaches, in a context where the vast majority of design and deployment will be offshore. This is not an argument for masterly inactivity; it is simply tabling realpolitik. It is interesting to reflect on how government regulation was influenced, and to some degree crimped, by the actions of large digital platforms in the context of the News Media Bargaining Code.