In 2022, StageZero conducted the first large-scale study of artificial intelligence (AI) adoption across Europe. We surveyed technologists, including data science and machine learning leads, engineers, and decision-makers, across multiple industries and business functions to find out how they and their companies are implementing and benefiting from AI. The results were published in a comprehensive report, free to download here.
Our report aimed to highlight surprising or unexpected aspects of the European AI ecosystem and how it implements AI. The principal surprise for us was the extent to which companies recognize that regulatory compliance is essential.
As data regulation becomes more prevalent in Europe, regulatory compliance is flagged as a serious challenge by the majority of our survey respondents. While most regulations do not target AI explicitly, privacy regulations bring AI development squarely into scope: algorithms train on data, and that data can contain personal information or other regulated data.
As we create more algorithms we need more data, and more sophisticated algorithms might require personally identifiable information (PII), making regulatory compliance critical. The StageZero report shows that compliance with privacy regulations is of great concern to the majority of companies and is even preventing some from reaping the full benefits of AI implementation.
When asked to evaluate the factors that prevent them from seeing value from their AI implementation, 70% of the companies surveyed reported that regulatory compliance is a concern. Only 16% said it is not an issue, while 14% were unsure. In a separate question on GDPR compliance, the results were similar: 75% said GDPR is a challenge to seeing value from AI implementations, 16% did not think it is a big problem, and 8% were unsure.
Read more: Enterprise AI adoption: Top challenges and solutions to overcome them
“I was surprised to see that only 16% of companies don’t consider regulatory compliance an issue,” Dr. Magnus Westerlund, Principal Lecturer at Arcada University of Applied Sciences, tells us. “Personal data is very specific, so it’s not necessarily the case that all companies would even need to use data that could be directly or indirectly linked to an individual. It’s fairly easy nowadays to unlink or decouple data from people, so you can’t derive who’s done what. I imagine the majority of companies know how to do this, so in that sense, privacy shouldn’t be an issue. If the respondents are in mainstream industries, then it shouldn’t be an issue. On the other hand, maybe the respondents are also thinking about other types of regulation. We’ve seen a lot of discussion around the proposed AI Act, so maybe they’re concerned about how their case will change if new legislation appears.”
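To make the kind of decoupling Dr. Westerlund describes concrete, here is a minimal Python sketch that replaces direct identifiers with keyed-hash tokens before records enter a training set. The field names and key handling are hypothetical, and it is worth noting that keyed hashing is pseudonymization rather than full anonymization under GDPR, since whoever holds the key can still re-link records.

```python
import hmac
import hashlib

# Hypothetical secret key, held separately from the dataset.
# Whoever holds this key can still re-link records, so under GDPR
# this counts as pseudonymization, not full anonymization.
SECRET_KEY = b"store-me-in-a-key-management-service"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (email, user ID) with a stable token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "clicks": 42, "country": "FI"}

# Keep the behavioral fields; drop the direct identifier.
training_record = {
    "user_token": pseudonymize(record["email"]),
    "clicks": record["clicks"],
    "country": record["country"],
}
print(training_record)
```

Because the token is stable, the same user maps to the same token across records, so models can still learn per-user patterns without ever seeing a name or email.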
Thomas Forss, CEO and co-founder of StageZero Technologies, agrees, adding: “I would hope that the world could move towards one joint regulation over time. I think that in the short term it’s probably impossible; we’re already seeing, for example, different privacy regulations appearing in each US state. Having 50 different regulations in the USA is a lot, and if this trend holds across regions, we’ll end up with over a hundred different regulations across the globe. Smaller companies just don’t have the resources to ensure they’re following every one of them, so I believe that many regulations will stifle competition, and that’s a negative outcome.”
Read more: StageZero's guide and checklist to privacy and AI and How to ensure data compliance in AI development | StageZero checklist
While privacy is the main safety concern, the report also identifies cybersecurity and physical safety as important factors across many business functions, and AI is no exception.
The report states that cybersecurity is a moderate concern for enterprises in Europe when it comes to seeing value from AI implementation, with 67% reporting being a bit, somewhat, or a lot concerned. In AI training and maintenance, the general preference is to use real-world data over synthetic data. That data must be protected and secured, and because enterprises understand that compliance is critical, they also recognize cybersecurity as a crucial function.
Physical safety is also seen as a factor that limits enterprises from seeing value from their AI implementations, though on a lesser scale than privacy and cybersecurity: about 22% state that physical safety somewhat prevents them from seeing value. One reason could be that only 5% of respondents work in the Automotive and Transportation sector, an industry handling projects such as self-driving vehicles and therefore exposed to stricter physical-safety regulations.
Based on the report, the challenges pertaining to regulation will only grow in relevance for companies that develop and/or implement AI models. The intensifying impact of data regulations will continue to push enterprises towards more secure ways of handling data. At the same time, restricted by current and upcoming regional data and AI regulations, companies might take the easier route and limit their data-sourcing efforts or collect only local data, which could result in bias and flawed results.
Read more: AI and regional data privacy laws: key aspects and comparison
Enterprises have a plethora of concerns regarding privacy regulations and ethics; here we explore the main ones relating to individuals' sensitive information.
AI algorithms rely on vast quantities of data to learn and make predictions, but collecting this data can violate privacy regulations. One concern is that PII might be collected without a person’s consent. For instance, some companies might collect information about people’s online behavior without their knowledge or permission, including browsing history, location data, and other sensitive information.
Another concern is that PII could be used in ways that were not disclosed and to which the person did not consent. Companies could, in theory, collect data for one stated purpose, such as targeting their advertising, and later use that same data for other purposes, such as building a profile of the individual’s preferences, interests, and behaviors.
Additionally, there is a prevalent concern that some PII might be collected and then used in ways that perpetuate discriminatory practices. If an algorithm is trained on biased data, it could perpetuate or even amplify existing biases in decision-making processes, resulting in unfair treatment of certain demographics, such as minority groups.
To address these concerns, enterprises know it is vital to promote transparency around their data collection and usage, to obtain informed consent from individuals, and to ensure that data is used exclusively for the purposes for which it was collected.
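One lightweight way to enforce purpose limitation in code is to attach the consented purposes to each record and check them before any processing step. The sketch below is purely illustrative: the ConsentRecord structure and the purpose names are our own assumptions, not a standard API.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Illustrative consent record; field names are hypothetical."""
    user_id: str
    purposes: set = field(default_factory=set)  # e.g. {"advertising"}

def check_purpose(consent: ConsentRecord, purpose: str) -> None:
    """Refuse processing for any purpose the user did not consent to."""
    if purpose not in consent.purposes:
        raise PermissionError(
            f"User {consent.user_id} has not consented to '{purpose}'"
        )

consent = ConsentRecord(user_id="u123", purposes={"advertising"})
check_purpose(consent, "advertising")    # passes silently
# check_purpose(consent, "profiling")    # would raise PermissionError
```

The point of the pattern is that reuse for an undisclosed purpose, like the profiling example above, fails loudly instead of happening silently.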
AI algorithms can produce discriminatory decisions due in part to biased training data. For instance, if a facial recognition algorithm were trained on a dataset of predominantly white-skinned faces, its performance would likely be poor on faces with other skin colors, resulting in inaccuracies and bad decisions. Algorithms can also have sufficient training data to identify minority groups yet remain biased in the assumptions or stereotypes they associate with those groups, which again can lead to discriminatory practices.
These biases have important implications for privacy compliance. If an algorithm makes decisions that affect individuals and bases those decisions on stereotypes, the impact on the individuals concerned can be significant. For example, if an algorithm is used to make decisions about credit ratings, hiring, or criminal justice proceedings, a biased decision can cause serious harm. If the decisions are based on sensitive information such as gender, race, or age, this will most likely constitute a serious breach of privacy compliance.
To mitigate this risk, enterprises should evaluate their AI algorithms carefully to identify and remove biases. They need to ensure that algorithms are designed and trained with impartiality, tested on real-world use cases, and monitored for biases that might crop up. Transparency goes a long way towards helping individuals and enterprises understand how decisions were made, and towards holding enterprises accountable should any discriminatory practices occur.
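As a concrete starting point for that kind of monitoring, the sketch below computes per-group positive prediction rates and a disparate-impact ratio, one common (though far from sufficient) fairness check. The data is made up, and the 0.8 threshold is a heuristic borrowed from the US "four-fifths rule", not a legal standard.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Share of positive (favorable) predictions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Toy data: 1 = favorable decision (e.g. loan approved).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)          # {'A': 0.6, 'B': 0.4}
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")

# The 0.8 threshold echoes the US EEOC "four-fifths rule";
# it is a heuristic trigger for investigation, not proof of fairness.
if ratio < 0.8:
    print("Warning: possible disparate impact; investigate before deployment.")
```

A check like this belongs in the ongoing monitoring loop, not just at launch, since bias can emerge as input data drifts over time.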
Read more: Data diversity and why it is important for your AI models
For obvious reasons, facial recognition is a privacy minefield. The use cases most ripe for abuse involve surveillance, and understandably many people do not want their whereabouts and activities tracked or their faces recorded without their explicit consent.
Since facial recognition involves collecting and processing personal information such as facial images and biometric data, that information must be collected and processed with the full understanding and consent of each individual concerned.
The technology is particularly concerning if it reaches the wrong hands. Facial recognition could be used to target individuals based on their religion, political beliefs, or other socio-political characteristics, which is why privacy regulation is tight in this field. And since the rate of false positives is still quite high, there is a danger that an innocent individual could be misidentified and then wrongly accused or charged with criminal activity.
Biased algorithms can also result in inaccurate or plainly discriminatory targeting. When facial recognition is used for law enforcement or other high-stakes situations, this has serious potential to produce catastrophic outcomes. This is again why transparency and accountability are key: careful evaluation, appropriate safeguards, and ensuring that all individuals have given consent will help protect their right to privacy. It is also advisable to check your local jurisdiction, since many are introducing regulations to restrict or outright ban facial recognition, especially for law enforcement purposes.
Transparency is a key topic linked to multiple areas of improved AI ethics, and privacy is one of them. When the systems behind AI outcomes are hard to interpret, the decision-making process lacks clarity. This is also a concern in other data-driven technologies.
When algorithms are used to make decisions that impact people (like credit scoring or hiring), it is crucial that the individuals affected understand how the decision was made. One of the challenges with AI is that algorithms can be difficult to explain, especially deep learning algorithms, which combine millions of parameters with complex mathematical modeling. This complexity results in a lack of understanding of the decision-making process, making it difficult for individuals to challenge decisions that turn out to be unfair or discriminatory.
When individuals don’t understand how or why their data is being used, it can erode trust in the enterprise or institution and discourage people from sharing their personal data in the future. To address these concerns, we advise enterprises to supply clear explanations of their AI decision-making processes, including what data is used to inform decisions and how decisions can be challenged or appealed. We also recommend giving individuals clear control over their data: offer the option to opt out of data collection, and explain how they can request that their information be deleted. Comply with local legislation. In the future, we expect to see more regulatory frameworks guiding transparency and explainability in AI and data-driven technologies.
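For simple models, or linear surrogates of complex ones, a per-decision explanation can be as basic as reporting each feature's contribution to the score. The sketch below assumes a hypothetical credit-scoring model with invented weights; real explainability work (e.g. SHAP or LIME) goes well beyond this, but the principle of attributing a decision to its inputs is the same.

```python
# Hypothetical weights from a linear credit-scoring model (illustrative only).
weights = {"income": 0.6, "debt_ratio": -0.9, "years_employed": 0.3}
bias = -0.2

applicant = {"income": 1.2, "debt_ratio": 1.5, "years_employed": 0.5}

# Each feature's contribution = weight * value; contributions plus the bias
# sum to the raw score, so the breakdown fully accounts for the decision.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

print(f"score: {score:.2f}")
for feature, contrib in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature:15s} {contrib:+.2f}")
```

An applicant shown this breakdown can see, for instance, that a high debt ratio drove the negative score, which gives them something concrete to challenge or correct.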
AI privacy and cybersecurity are closely related concerns, since data breaches and similar incidents can result in the theft and exposure of sensitive personal information. AI systems can be vulnerable to cyber-attacks, and unauthorized access to data is in itself a serious compliance breach; the consequences can be catastrophic.
When hackers access databases of personal information, they can use them for many malicious purposes, including financial fraud and identity theft, which can cause individuals significant harm through financial loss, reputational damage, and emotional distress. There is also the potential for governments to access the information and use it for surveillance and other types of monitoring. In some cases, governments may cite cybersecurity threats to justify collecting and monitoring individuals’ data, for example with respect to their online activities. This violates privacy and other civic freedoms, especially if the surveillance is carried out without transparency or oversight.
Enterprises must take cybersecurity seriously. The first step in addressing privacy concerns related to cybersecurity is to store and protect data properly. Data that is not securely stored is vulnerable to cyber-attacks and other unauthorized access, which is especially problematic when the information is sensitive. Proper protection and storage involve encryption, access controls, regular monitoring and auditing of all systems to identify and mitigate vulnerabilities, and other security measures. In the future we expect to see more regulatory frameworks holding enterprises accountable for privacy violations. Meanwhile, we recommend active compliance with current guidance and regulations, and clear communication with individuals about how their data is used and protected.
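As one small illustration of encryption at rest, this sketch uses the Fernet recipe from Python's cryptography library for authenticated symmetric encryption. Key management, access controls, and auditing are assumed to happen elsewhere; in practice the key would live in a key-management service, never alongside the data it protects.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In production the key comes from a key-management service and is
# never stored next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

sensitive = b'{"name": "Jane Doe", "id": "000-00-0000"}'  # made-up record

token = fernet.encrypt(sensitive)   # authenticated ciphertext, safe to store
restored = fernet.decrypt(token)    # raises InvalidToken if tampered with

assert restored == sensitive
print(token[:32], b"...")
```

Because Fernet ciphertext is authenticated, tampering is detected at decryption time rather than silently producing corrupted records, which supports the monitoring and auditing practices described above.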
Overall, the challenges pertaining to regulation will continue to grow in relevance for companies that develop and/or implement AI models, pushing enterprises to adopt more secure methods of data handling. This is challenging for the enterprise but helps ensure compliance, which is crucial for the individuals whose data is in use.