Risk and Reward: A Look at the Future of AI in Communications
Fireside Chat @ INCOMPAS SHOW: Chip Pickering, CEO of INCOMPAS, and Dave Stehlin, CEO of TIA
We are excited to share that our CEO, Dave Stehlin, recently engaged in a thought-provoking conversation with Chip Pickering, CEO of INCOMPAS, at the INCOMPAS Show. They delved into the important subjects of licensing, certification, and standards for AI solutions, with the goal of safeguarding critical infrastructure, products, and services. Here is a brief overview of the enlightening discussion that took place during the session:
Q1: There is a growing debate around whether there should be greater oversight of AI through regulation, licensing, certification and standards setting. If the industry or Congress should move to implement some of these proposals, can you discuss how TIA could play a role in this area?
The US Government is moving rapidly to better understand, and perhaps regulate, AI. Initial guidance has already been given in the form of the National AI Initiative Act of 2020, which became law on January 1, 2021. Among the law's provisions:
“The Advisory Committee shall advise the President and the Initiative Office on matters related to the Initiative, including recommendations related to—opportunities for international cooperation with strategic allies on artificial intelligence research activities, standards development, and the compatibility of international regulations…”
An outcome of the Act has been the development of the Artificial Intelligence Risk Management Framework (AI RMF 1.0) developed by the National Institute of Standards and Technology (NIST). On July 29, 2021, NIST released an initial draft of the Framework, and by August 18, 2022, it had already been revised twice, demonstrating the rapid evolution of the technology.
The AI RMF is intended to provide guidance “to address risks in the design, development, use, and evaluation of AI products, services, and systems.” The AI RMF is nonbinding, like many NIST standards, but expectations are that elements of it will be accounted for in emerging industry standards.
AI may be an example where government intercession is coming earlier than needed as compared to other historical examples. But considering the potential consequences of AI, earlier is probably better than too late, after severe and negative unintended consequences have already been experienced. As Elon Musk has opined, "AI is one of the biggest risks to the future of civilization."
Challenges like these, driven by the rapid emergence of new technologies, are prime targets for industry standardization. TIA is uniquely positioned to convene the relevant stakeholders from across the industry. We bring together business executives, technologists, policymakers, analysts, and others to address challenges exactly like this one. We are already working to understand the regulatory landscape so we can help our members prepare and provide input to shape future policies and guidelines.
And I will say that we have all experienced a significant increase in cybersecurity insurance costs, and these increases will continue and perhaps accelerate because of the risks associated with AI.
Q2: Are there other models you can point to where TIA has had success with setting and certifying compliance with standards?
TIA, with our members, has done this multiple times with different types of standards.
Recently, TIA introduced SCS 9001, the ICT industry’s first cyber and supply chain security standard, which helps suppliers of ICT products and services develop, manage, measure, and certify processes to build trust into information communications technology solutions. TL 9000 is another good example of a problem that existed, and how a standard helped solve the problem.
That standard was created because there was a massive telecom outage on the East Coast. The US Government got involved and said, "Industry, either you fix this or we'll tell you how to fix it." So, large telcos and ISPs got together and drove the rest of the industry to create a measurable, certifiable standard with clear requirements. Vendors then had to modify their product development processes and begin tracking performance against those requirements. The TL 9000 benchmarking program provides irrefutable, empirical evidence of how adoption of a standard drove continuous quality improvements in an ICT sector and helped our participants improve their results.
The quality of the vendors' products and the ISPs' networks has significantly improved as a result. Even now, more than 20 years later, the standard is still used around the world.
The TIA Edge Data Center standard is a more recent example: in less than two years, independent certification bodies (CBs) have already certified about 300 data centers around the world.
These standards are living documents: participating companies regularly update them to address evolving market and technology conditions and to stay relevant.
Q3: From your perspective, could industry standards and certification of products address some of the risk associated with AI products and use in the market?
Absolutely! Artificial intelligence and machine learning are not new concepts. Such technologies have been in use for years, but in a relatively narrow set of special use cases.
AI is now being driven to mainstream commercial adoption across a virtually limitless set of use cases and industries. As one data point, ChatGPT reached 100 million active users in January 2023, a faster ramp-up than even TikTok, making it the fastest-growing application in history (source: UBS research, as reported by Barron's). AI, as an outgrowth of ML, will continue its expansion into network planning and operations, as it offers the potential for significant improvements.
But there are also huge risks without proper controls. A process improvement standard, aligned with the leading government frameworks, can be used to significantly mitigate risk and support continuous improvement. Building clear, measurable, and certifiable requirements into voluntary industry standards is something TIA has done for many years. And we are all better off with industry-led standards rather than the heavy hand of government prescribing unworkable regulations.
It would seem that this would align with the guidance from NIST found in their AI RMF which states:
“Incorporation of the AI RMF in international standards will further the Framework’s value as a resource to those designing, developing, deploying or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI Systems.”
Government guidance and expert opinion both stress the need for transparency in AI systems as a means of preserving privacy and security and mitigating risk. Independently certifiable industry standards are a great way of demonstrating that transparency.
TIA is in the process of adding a new ‘lane’ to our Cyber and Supply Chain Risk Management standard SCS 9001. This effort is intended to specifically address IoT. We will first work on consumer IoT needs for supply chain security and then broaden it out.
Creating an AI ‘lane’ to the SCS 9001 standard is certainly a solid route to further explore and a distinct possibility.
Q4: Many have expressed concern about AI’s possible unintended consequences, including the potential for heightened cyber security risks. How could an organization like TIA work with companies to address and mitigate this area of identified risk?
Fast-evolving technologies such as AI, with their potential for unintended consequences, are ideal candidates for certifiable industry standards. The potential consequences of poor implementations and insufficient competency in administering the technology are far too great to leave unaddressed.
Again, industry is always better off staying ahead of government, as government-driven requirements are typically heavy-handed and burdensome.
TIA has a long history of developing international standards that solve significant problems, satisfy government and improve the Information Communications Technology industry. We bring together experts as a recognized Standards Development Organization and use best practices to rapidly and fairly complete the work.
There is a substantial body of existing work in cybersecurity and supply chain security, such as TIA's SCS 9001 Supply Chain Security standard, that can serve as the basis for standardizing AI capabilities. This need not be a start-from-scratch effort: applying existing standards provides a significant head start toward comprehensive AI industry standards.
TIA has more than 1,000 volunteers involved in creating and updating our standards; this is an industry-wide opportunity.
Q5: As we close out our time together, can you share with us your view of what industry and companies that develop and/or utilize AI should be thinking about with regard to the possibility of standards and certification for AI?
TIA is an organization that has operated for over 80 years. We are both an industry association and a standards development organization. We are technology and network agnostic. Our focus is on the ICT industry and our standards are developed by industry volunteers and for the benefit of all.
Earlier this year, the White House released a document in combination with leading Internet companies, many of whom are INCOMPAS members, to frame the policy debate on AI. This document calls out three pillars: Safety, Security, and Trust.
Creating a process-based, measurable standard would be a significant step toward ensuring that AI products and services are designed and developed with Safety, Security, and Trust in mind.
We invite organizations who share our vision to collaborate with us in advancing capabilities to address the requirements for overseeing and managing this transformative new technology.