Demystifying the AI regulatory landscape
With the UK’s AI Safety Summit taking place at the start of November at Bletchley Park, hot on the heels of an Executive Order from President Biden on the same topic, the debate around AI safety and regulation has been growing ever louder and more complex.
Even before these recent events, an increasingly intricate global AI regulatory framework was taking shape, with every country moving at its own speed. Rather than fixating on long-term objectives, the industry needs greater simplicity and focus now, starting with three key areas that should be its first priorities: the AI models themselves, the data they consume, and the ultimate outcomes these combinations produce.
Across all three areas, we must keep accountability, reliability, impartiality, transparency, privacy, and security top of mind if we are to have any hope of navigating the complex landscape of AI regulation today.
Reaching clarity
Before we can begin to have a realistic discussion about 'AI', it's important to be crystal clear about which technologies are actually being referred to. Given the significant variation among AI and machine learning (ML) models, there is already a growing concern that these terms are being mistakenly conflated. With ChatGPT adopted at record speed, people are already using 'ChatGPT' as shorthand for AI or ML in general – the way we might use 'Google' to refer to all search engines.
Regulators need to implement guidelines that standardize the language around AI, which will make it easier to understand which model is being used and, ultimately, to regulate risk parameters for these models. Otherwise, two models can take the exact same data set and draw wildly different conclusions, based on biases – conscious or unconscious – that are ingrained from the outset. More importantly, without a clear understanding of the model, a business cannot determine whether outputs from the platform fit within its own risk and ethics criteria.
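To see how concrete this risk is, consider a minimal sketch (assuming Python with scikit-learn, and a synthetic data set invented purely for illustration) in which two common model families are trained on identical data and still reach different conclusions about identical records:

```python
# A minimal sketch, assuming scikit-learn: two model families trained on
# the exact same data can still disagree about the same records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# One shared, fixed data set (synthetic, for illustration only).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

linear_model = LogisticRegression(max_iter=1000).fit(X, y)
tree_model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Count the records on which the two models reach opposite conclusions;
# the outcome depends purely on which model a platform happens to use.
disagreements = int((linear_model.predict(X) != tree_model.predict(X)).sum())
print(f"Models disagree on {disagreements} of {len(X)} identical records")
```

The point is not that either model is broken; it is that model choice alone changes the outcome, which is why regulation needs a shared vocabulary for describing which model is in play.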
The automotive sector has put well-defined autonomy levels in place for autonomous vehicles, enabling car manufacturers to innovate within clearly delineated parameters. Given the wide spectrum AI encompasses, ranging from ML data processing to generative AI, regulators have a unique opportunity to inject similar clarity into this complex domain.
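For reference, the automotive taxonomy in question is the SAE J3016 scale of driving automation. Expressed as a simple tiered enum (a sketch only, with paraphrased level descriptions), it shows the kind of shared, graduated vocabulary an AI-model classification could mirror:

```python
from enum import IntEnum

# The six SAE J3016 driving-automation levels, paraphrased. A tiered
# taxonomy like this gives carmakers clearly delineated parameters; an
# analogous scale could run from narrow ML processing to generative AI.
class DrivingAutomation(IntEnum):
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering and speed automated, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, human must take over on request
    HIGH_AUTOMATION = 4         # no human needed within a defined operating domain
    FULL_AUTOMATION = 5         # no human needed under any conditions
```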
While regulations aimed specifically at AI models may appear somewhat limited at present, it is crucial to factor in the regulations that govern the ultimate outcomes of these models. For instance, an HR tool employing machine learning for job-candidate screening might inadvertently expose a company to discrimination-related legal issues unless rigorous bias-mitigation measures are in place. Similarly, a machine learning tool adept at detecting personal data within images of passports, driver's licenses, and credit cards must strictly adhere to data protection regulations.
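To make 'bias-mitigation measures' less abstract, here is one hypothetical check (my illustration, not something the article or any specific regulation prescribes): US employment-selection guidance includes a 'four-fifths rule', under which a screening step warrants review if any group's selection rate falls below 80% of the highest group's rate.

```python
# A hedged illustration of a disparate-impact check for an automated
# CV-screening tool, based on the "four-fifths rule" from US employment
# guidance. Group labels and counts are invented for the example.
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> bool:
    """outcomes maps group -> (candidates screened in, candidates assessed)."""
    rates = {group: passed / total for group, (passed, total) in outcomes.items()}
    highest = max(rates.values())
    # Pass only if every group's selection rate is at least 80% of the highest.
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical screening results from an ML-based candidate filter.
results = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(results))  # False: 0.30 < 0.8 * 0.45, so review the model
```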
The AI data ecosystem
Before regulatory action is taken to oversee the development and deployment of AI, it is worth examining how existing regulations might be extended to AI. These tools rely heavily on a dependable data supply chain, and IT and security leaders are already grappling with a slew of data-related legislation – HIPAA, GLBA, COPPA, CCPA, and GDPR among them. Since GDPR came into force in 2018, Chief Information Security Officers (CISOs) and IT leaders have been mandated to provide transparent insights into the data they collect, process, and store, along with the purpose behind that data handling. GDPR also empowers individuals with the right to control the use of their data. Understandably, leaders are concerned about how deploying AI and ML tools could affect their ability to comply with these existing regulatory requirements.
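Concretely, the transparency GDPR demands can be thought of as a record of processing. The sketch below is purely illustrative (the fields, values, and purpose strings are my assumptions, not a legal template), but it shows why an AI feature cannot simply consume whatever data is to hand:

```python
from dataclasses import dataclass

# A hypothetical record-of-processing entry: what is collected, why,
# where it lives, and whether the data subject has consented.
@dataclass
class ProcessingRecord:
    data_category: str     # e.g. "email address"
    purpose: str           # the declared reason for processing
    storage_location: str  # where the data is stored
    consent_given: bool    # the data subject's current consent status

records = [
    ProcessingRecord("email address", "account login", "eu-west-1", True),
    ProcessingRecord("chat transcripts", "model fine-tuning", "us-east-1", False),
]

# An AI feature may only consume data whose declared purpose covers that
# use and whose consent is still in place.
trainable = [r for r in records if r.consent_given and r.purpose == "model fine-tuning"]
print(len(trainable))  # 0 – the fine-tuning data lacks consent
```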
Both businesses and regulators are in pursuit of clarity. They seek to understand how existing regulations apply to AI tools and how any modifications might affect their status as data processors. AI companies are encouraged to exhibit transparency with customers, showcasing how their tools comply with existing regulations through partnership agreements and terms of service, particularly with regard to data collection, storage, processing, and the extent to which customers can exert control over these processes.
Ethical progress
In the absence of unambiguous regulatory guidelines in the AI landscape, the onus falls on technology leaders to champion self-regulation and ethical AI practices within their organizations. The objective is to ensure that AI technologies yield positive outcomes for society at large. Many companies have already released their own guiding principles for responsible AI use, and these consistently underscore the importance of accountability, reliability, impartiality, transparency, privacy, and security.
Technology leaders, if they have not already started, should embark on an evaluation of the ramifications of integrating AI into their products. It is advisable for companies to establish internal governance committees focused on AI ethics. These committees should assess the tools and their application within the organization, review processes, and devise strategies in anticipation of broader regulatory measures.
While the establishment of a regulatory body, akin to the International Atomic Energy Agency (IAEA) or the European Medicines Agency (EMA), was not a focus at the AI Safety Summit, it could prove instrumental in crafting a worldwide framework for AI regulation. Such an entity could foster standardization and delineate the criteria for ongoing evaluations of AI tools to ensure continued compliance as the models evolve and mature.
The path to an enlightened future
AI harbors the potential to revolutionize our lives, yet it must not come at the expense of the fundamental tenets underpinning data rights and privacy as we understand them today. Regulators must strike a fine balance that safeguards individuals without stifling innovation.
After the deliberations among government and industry leaders at Bletchley Park, my primary aspiration is to witness a heightened emphasis on transparency within the existing AI landscape. Instead of relying solely on goodwill and voluntary codes of conduct, AI companies should be compelled to furnish comprehensive disclosures regarding the models and technologies underpinning their tools. This approach would further empower businesses and customers to make well-informed decisions regarding adoption and enhance their autonomy over their data.
Source: https://www.techradar.com/pro/demystifying-the-ai-regulatory-landscape