At a time when breakthrough technologies are changing the landscape of our daily lives, the convergence of GenAI and cutting-edge 6G networks is paving the way for unprecedented transformation across a variety of sectors. In a conversation with ETCIO, Balakrishna DR (Bali) delves into how these technological leaps are reshaping the business environment and social interactions.
Additionally, Mr. Balakrishna will share industry perspectives and practical advice for navigating the dynamic landscape of AI, highlighting Infosys’ Responsible AI Framework. The conversation will also explore recent AI policies that address responsible deployment, data security, and the evolving regulatory landscape.
What transformational impact do you expect upcoming trends such as 6G and generative AI to have across industries?
GenAI leverages 6G-enhanced edge computing capabilities to enable real-time processing and decision-making closer to the edge. This is critical for applications that require fast response times, such as autonomous mobile robots (AMR), intelligent drones, and IoT devices.
The enhanced connectivity provided by 6G networks facilitates coordination and communication between AI-enabled devices, enabling more seamless integration and collaboration. This improves reliability and efficiency across a variety of applications, including precision agriculture, advanced manufacturing, the energy sector, and healthcare.Balakrishna DR, Executive VP, Global Head of AI and Industry Verticals, Infosys
For example, in the manufacturing industry, multimodal GenAI combined with powerful enablers such as 6G, 3D printing, and robotics can help design and design complex tools based on customized and personalized requirements in a significantly reduced time frame. It will be possible to manufacture. Manufacturers will be able to respond quickly to changing consumer preferences and create personalized products with shorter lead times. This creates the creation and demand for new products and services that were previously unimaginable. The convergence of next-generation technologies and GenAI will also change the way we expand and redefine the path to virtual collaboration through simulation, digital twin generation, and AR/VR. Fully realize human-machine interaction.
Can you share your industry perspective and practical advice for navigating the dynamic landscape of AI?
AI, like any technology, has its own challenges, including the need for high-quality data, explainability of decisions, fairness, security, and human oversight. While AI creates breakthrough opportunities, it also raises questions about bias, security, privacy, and trust.
Infosys has developed and adopted the Responsible AI Framework to overcome ethical AI challenges and build trusted systems powered by Infosys Topaz. The framework addresses five aspects: people and planet, economic context, data and inputs, AI models, and tasks and outputs.Balakrishna DR, Executive VP, Global Head of AI and Industry Verticals, Infosys
Each further has its own subdimensions. People and the planet have stakeholders, human rights, etc. An important sub-dimension of data and input is managing bias. This includes processes and audits to ensure demographic diversity is represented in training data and systemic inequities against disadvantaged people. filtered out. For any future AI project, information will be collected along these dimensions and sub-dimensions that address issues including ethical AI principles such as fairness, transparency, accountability, privacy, and security.
Applying this framework calculates a risk score that identifies a project’s risk category. If the risk categories fall within the acceptable range, the project is approved to proceed. Otherwise, further risk mitigation steps are recommended or the project is rejected.Balakrishna DR, Executive VP, Global Head of AI and Industry Verticals, Infosys
What are the best practices for integrating AI while ensuring regulatory compliance?
“Responsibility by design” is one of the cornerstones of an AI-first approach. To implement this, we enhanced our existing framework to cover important AI-specific areas. This approach is utilized in all AI use cases and ensures that risks and mitigation strategies are prepared and discussed with all stakeholders. Additionally, we have begun efforts to codify these policies into the AI engineering lifecycle to ensure automated compliance.
We also created services to help customers deploy responsible AI across their enterprises.
Infosys Responsible AI Toolkit: A collection of specially designed Responsible AI pipelines and API endpoints that can be integrated into your AI development lifecycle. It enables automation through design principles and helps data scientists and developers protect against various risks. Supports all kinds of AI models, use cases, and data types.
Infosys GenAI guardrails: It’s a moderation platform that detects and mitigates anomalous prompts and output in generative AI systems. Detect various threats in prompts such as personally identifiable information (PII) leakage, prompt injection, copyright infringement, requests for harmful content, and output such as hallucinations and inappropriate content according to your organization’s policies. It can be used to reduce .
Infosys Responsible AI Gateway: It is an automated Responsible AI platform that is embedded into an organization’s core systems and mission-critical workflows, ensuring responsible AI protocols are followed in day-to-day operations. Check for risk transfer in critical workflows involving AI.
Infosys AI Security Platform: An enterprise AI security platform that detects and responds in real-time to various attacks on models, including poisoning, evasion, inference, and injection.
Customers are increasingly aware of the need for AI systems to function responsibly and ethically, and we expect this to become a key consideration when building AI systems in the future.
How do recent updates to AI policy address responsible deployment and data security issues?
The policy landscape is rapidly evolving across regions. In 2023, a number of comprehensive regulations such as the EU AI law will be enacted, AI-related bills will be introduced in various US states, and targeted interventions on specific issues are also being considered. . These regulations address multiple safety issues such as human oversight, fairness, transparency, security, reliability, and critical areas.
However, regulations designed for AI must take into account the following considerations:
Minimize regulatory delays and be adaptable: The field of AI is evolving rapidly, so there is a risk that regulations introduced will be outdated by the time they are enacted. This is because the pace of technological advancement far exceeds traditional regulatory processes. To remain relevant in this dynamic environment, regulations need to be flexible and adaptable. Additionally, given the short shelf life of today’s AI models, it is best for regulations to focus on preventing specific harms and protecting human rights and safety outcomes, rather than focusing solely on the technology.
One comprehensive law or strengthened sectoral laws: Risks and concerns about AI vary across different industry use cases. For example, the same category of use cases in certain industries may involve more risk than others. Consider the use case of recommending a product or service to a customer. The negative impact could be more severe in health care and financial services than in retail. Some sectors already have strict regulations in place that address scenarios such as discrimination, disclosure, safety, and reliability.
Addressing the complex nature of AI development and use: The AI value chain consists of multiple players providing data management platforms, computing platforms, development platforms, models, and potentially different categories of end users. A one-size-fits-all approach will not address all the complexities involved in these multi-party scenarios. Responsibility for compliance should be distributed proportionately across the chain, rather than being placed solely on one party.
Global harmonization of regulations: A patchwork of regulations across different regions can create challenges for companies operating in multiple markets. Compliance costs such as audits, enforcement, and reviews can increase exponentially, making AI implementation impossible from a business perspective.
Enforcement challenges result in unequal access to AI for smaller players. Effectively enforcing complex AI regulations requires expertise and resources that small and medium-sized businesses often lack. Addressing deep-seated issues like algorithmic bias, AI security threats, and black-box model transparency requires advanced technology and specialized skill sets.
While big tech companies can easily comply, strict regulations can negatively impact the use of AI by small businesses and startups. This could hinder innovation and democratization of AI. Regulations therefore need to take this into account and put appropriate safety nets in place for smaller players.
Regulatory sandbox: Regulatory sandboxes and R&D exemptions are important for AI. AI is still going through many innovation cycles and we need to provide the freedom to experiment so that innovation is not hindered.