
AI Ethics Really Come Down To Security

Forbes Technology Council

CTO and Executive Vice President at NXP Semiconductors, a leader in Automotive, IoT, Industry 4.0, Mobile and Connectivity technologies.

It's expected that there will be 75 billion smart connected devices in our homes and offices by 2025, and many of them will have added capacity to sense, process and make decisions without first checking with the cloud, or with us. If we're going to rely on them to take more active and responsible roles in our lives, we must be able to trust not only that they're ethical but that the AI and machine learning that underpin them operate safely and securely.

Already, the U.S., EU and other countries have started working on laws and regulations focused on the impact of AI on end users. A number of tech companies and other organizations (including the Vatican) are also collaborating to develop ethical codes of conduct for AI built upon key principles, including privacy, transparency and fairness.

However, making ethical AI safe and secure requires more than coding virtuous machines. As in many other industries, AI applications and devices need rigorous structure and support to ensure that the safest and most ethical decisions are being made, and they also require physical fail-safe measures in case things go wrong.

With the Internet of Things (IoT) growing into a robust ecosystem in which AI is quickly becoming an important component of edge computing, this is a pressing issue. Deloitte estimates that more than 750 million AI chips were sold in 2020 and predicts that by 2024, sales will grow to well over 1.5 billion. These chips are gaining significant processing power and becoming key components of smartphones, thermostats, security cameras, doorbells and speakers, which gives edge devices the opportunity to grow smarter over time through machine learning while reducing their dependency on the internet for AI/ML functionality.

Innovating trustworthy AI/ML depends on the design, development and distribution of AI systems that learn from and work collaboratively with humans in a comprehensive and meaningful fashion. Security and privacy must be considered at the start of any new technology's architecture; they cannot be properly bolted on as an afterthought. The highest required level of security and data protection must be built into both hardware and software so that it is configured into every step of the development and supply chain, from initial design all the way through to the technology's business and utilization model.

The Charter of Trust initiative for IoT cybersecurity (of which we're a partner) has also provided excellent guidelines for a risk-based methodology and verification that should be incorporated as core requirements throughout that supply chain.

After we identify the core principles that will govern AI development, we must determine how to ensure these ethical AI systems are not compromised. Machine learning can monitor data and pinpoint anomalies, but unfortunately it can also be used by hackers to increase the impact of their cyberattacks. The integrity and security of AI systems are therefore just as important as the ethical programming of the AI itself.
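To make the monitoring idea concrete, here is a minimal sketch of flagging anomalous device telemetry. The article names no specific technique; scikit-learn's IsolationForest and the simulated sensor readings below are illustrative assumptions only.

```python
# Minimal sketch: spotting anomalous readings from an edge device with
# an unsupervised model. Illustrative only; the article names no
# specific algorithm or library.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated telemetry: mostly normal readings, plus a few out-of-range
# values that a fault or an attacker might produce.
normal = rng.normal(loc=20.0, scale=1.0, size=(500, 1))
anomalies = np.array([[35.0], [-5.0], [60.0]])
readings = np.vstack([normal, anomalies])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(readings)

# predict() returns 1 for inliers and -1 for outliers.
print(model.predict(anomalies))  # expected: [-1 -1 -1]
```

In practice a deployed monitor would be trained on known-good data and score new readings as they arrive, but the principle is the same: the model learns what "normal" looks like and flags departures from it.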

It is critical that AI systems are capable of processing data inputs while also respecting users' privacy. This requires encryption of all communications, ensuring the privacy and authenticity of the data. I've found that edge AI systems are beginning to incorporate some of the most sophisticated cryptography solutions available in the marketplace today.
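As one concrete illustration of "privacy and authenticity" together, the sketch below uses authenticated encryption (AES-GCM) via the Python `cryptography` package. The key handling and message contents are hypothetical; the article does not prescribe a particular scheme, and a real edge device would provision its key in secure hardware rather than generate it in software.

```python
# Minimal sketch of authenticated encryption for device-to-cloud
# messages using AES-GCM. Illustrative assumptions: the key name,
# message and device ID are invented for this example.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice: stored in a secure element
aead = AESGCM(key)

nonce = os.urandom(12)          # 96-bit nonce; must never repeat for a given key
plaintext = b"temperature=21.5C"
associated = b"device-id:1234"  # authenticated but not encrypted

# Encrypting also produces an authentication tag: tampering with the
# ciphertext or the associated data makes decryption raise InvalidTag.
ciphertext = aead.encrypt(nonce, plaintext, associated)
assert aead.decrypt(nonce, ciphertext, associated) == plaintext
```

The design point is that confidentiality alone is not enough; the receiver must also be able to verify that the data came from the claimed device and was not altered in transit.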

An emphasis must be put on ways to leverage hardware security to prevent and defend against AI attacks that are able to retrieve sensitive information from secure systems. This will improve overall system security and data privacy. Devices with sophisticated security must include countermeasures to ward off anticipated logical and physical attacks.
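Physical countermeasures (shields, sensors, masked logic) live in silicon and have no software analogue, but one classic defense against a "logical" attack can be shown in code: comparing secrets in constant time so that response timing leaks nothing. The key and function names below are hypothetical, offered only as a sketch of the category of countermeasure the paragraph describes.

```python
# Minimal sketch of a timing-attack countermeasure: verify a message
# authentication tag in constant time. SECRET_KEY and verify_message
# are illustrative names, not from the article.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-device-key"

def verify_message(message: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    # A naive `expected == received_tag` can short-circuit on the first
    # mismatched byte, letting an attacker recover the tag byte by byte
    # from timing differences. compare_digest avoids that leak.
    return hmac.compare_digest(expected, received_tag)
```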

A big challenge today is that the AI ecosystem is a patchwork of contributions from a wide range of creators, yet consistency and complete integration are core requirements of ethical AI. Currently, the level of accountability and the amount of trust among contributors are neither equal nor consistent. Even a tiny violation of the "security and privacy by design" principle could, if uncovered by attackers, bring down the complete ecosystem. Every participant in the development and execution of AI must therefore work toward security that is interoperable and assessable.

It's going to take time for AI actors to agree to a universal code of ethics and even longer for end users to believe that devices will only do what's best for humanity.

We have a lot of work to do to prepare. Safety and security provisions across all edge computing must first be standardized. As we join forces to develop secure and trustworthy AI systems for the future, certification of the connectivity, silicon and interactions must be the priority for both chipmakers and their customers.

When it comes down to it, AI ethics are really about security.



