With artificial intelligence becoming increasingly adopted across wider society and governance, our panelists discussed the challenges posed by large-scale deployment, and how the EU should implement the 'AI Act' to protect citizens while also maintaining an environment for enterprise growth and technology innovation.
Protecting business growth and technology innovation.
Throughout the event, the panel discussed challenges to legislation and the importance of ensuring flexibility. Each panelist raised concerns about the existing broad definition of AI and highlighted how blanket regulation would stall new enterprise opportunities and technology development. Svenja Hahn MEP warned of overregulation and stressed the importance of setting clear and digestible guidelines. She emphasised this was particularly important for startups and SMEs, so that they have room to grow without the limitations of burdensome red tape that could halt progress and contribute to brain drain.
AI is a broad-reaching domain that cannot be discussed as a homogeneous subject. Stefanie Valdés-Scott, Adobe's Senior Manager for Government Relations in Europe, highlighted Adobe's diverse use of AI, from streamlining image processing to creating data-driven insights for enterprise. Stefanie discussed industry concerns that the current definition of AI is too broad and, in some cases, misapplied. MEP Karen Melchior emphasized this further, arguing that the potential dangers of technologies do not negate their benefits, and comparing the broad regulation of AI to banning knives and hammers, which can be used either as weapons or as tools.
Karen emphasised that it is the use, not the technology itself, that requires legislation. She stated that blanket regulation of AI is regressive, noting: "If we make the process for companies to develop AI systems too burdensome and unpredictable within the EU, we risk hindering business development and innovation without actually providing better protection". The panel was concerned about entire domains, such as health and education, being labeled as high risk. Within the healthcare sector, it was highlighted that AI is already being used to provide state-of-the-art diagnostic tools, and a blanket ban on AI in this sector would limit new medical advancements that could save countless lives through early intervention.
When asked whether the AI Act would help or hinder investment in AI startups in Europe, the panel agreed that it would encourage the growth of new enterprises. Both MEPs discussed how the AI Act will ensure a safe environment for investors by building trust through higher-standard technologies and predictable regulations. Stefanie complemented this by stating that Adobe, like most large tech companies, wishes to work with startups and SMEs that can operate globally and hold good reputations.
Preventing harm and discrimination.
Several concerns were raised about the negative impacts AI could have. MEP Svenja Hahn stated: "Normally politicians tend to regulate something that has come up in recent years and in this case we want to set a solid framework for something that will emerge in the future". MEP Karen Melchior echoed this, warning that a key challenge is identifying gaps in legislation when many potential issues are still unknown. Karen discussed the importance of understanding the methods AI uses for decision making. She raised an example of an AI system used to differentiate wolves from dogs: instead of distinguishing between the two animals, it based its decision on whether snow was in the background, which was the case in all the wolf images. Karen's concern is that AI used in areas such as social programs may be flawed in similar ways. She highlighted that AI may discriminate against welfare claims or remove children from family homes based on extraneous data that is neither relevant nor important.
Similar concerns were raised about the rise of biometric systems, where Svenja Hahn criticised the AI Act for being too broad and open to interpretation. Svenja stated that biometric systems are not only open to potential abuse by human operators but have proven to be both discriminatory and ineffective. Further concerns raised by the panel included the negative impact AI may have when used to detect sex and gender, such as in reactive street advertising, and the potential for border guards to use it to discriminate based on personal bias.
When questioned about ensuring transparency of AI development in regulation, the panel largely agreed that deep insight into its engineering is not required. Stefanie stated that in many cases full transparency may not be possible, but highlighted that trust is essential. She emphasized the importance of commitment to codes of conduct when developing these systems, which are often beyond the understanding of the general public. Svenja, mirroring this statement, highlighted how most people do not understand the nature of aircraft design when boarding an airplane but still trust these systems. Deep transparency into AI, such as individual lines of code, would not be necessary so long as secure ethical frameworks ensure trust and the methods of decision making and their outputs are understood.
A key factor raised by each member of the panel was the need to ensure diversity in the development of AI. Stefanie spoke of Adobe's current commitments, including the establishment of an AI ethics committee that helps guide product development, and the use of diverse groups to ensure expansive knowledge of what shape harmful content may take.