Free On-Demand Podcast

The Unplanned Show, Episode 4: Responsible Generative AI with Sriram Subramanian

Generative AI is a rapidly evolving ecosystem attracting a lot of attention. In this episode, Dormain Drewitz asks Sriram Subramanian about the main challenges to implementing generative AI responsibly, including content that is harmful or inaccurate, or that violates privacy or security standards. Sriram discusses Microsoft’s six tenets of responsible AI, as well as the notion of shared responsibility between the providers of platforms and foundational LLMs and the developers and data engineers building on top of them. Sriram also answers questions about where to get started safely with generative AI and shares his framework for identifying opportunities to add value.

Resources mentioned:
Marc Andreessen article

Summary created with help from ChatGPT

In this episode of the Unplanned Show, the host interviews Sriram Subramanian from Microsoft about responsible generative AI, exploring both ethical and practical considerations. Sriram, a Principal Program Manager at Microsoft, emphasizes the heightened concerns that come with the power of generative AI, specifically issues related to harmful content, inaccuracies, and security and privacy challenges. He highlights the potential for biased and inappropriate content generation, urging developers and organizations integrating generative AI into their workflows to stay vigilant. One memorable moment is Sriram’s analogy of large language models behaving like confident teenagers when generating content without full knowledge.

“LLMs are like a high school teenager who, if they don’t know the content, have every audacity and courage to write something with the most confidence as if they are factual. That’s very much a possibility with generative AI.”

Next, the host and Sriram delve into whether challenges related to harmful content, inaccuracies, and security and privacy in generative AI can be solved within the large language models (LLMs) themselves or require external checks and balances. Sriram points to Microsoft’s six responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. He outlines the industry’s recommended approach of governance, rules, training, and tools to enforce responsible AI practices. Sriram sees parallel developments in the industry: LLMs are inherently improving on responsible AI practices, while companies work on making these safeguards invisible to end users. He stresses the importance of shared responsibility, where both providers and application developers and consumers play a role in ensuring that generative AI applications are safe, reliable, and free of bias.

“It’s ultimately going to be a shared responsibility. An application developer—an end consumer—should also do their part of ensuring security, privacy. Ensuring whatever you’re thinking about using for applications follows best practices if you’re developing applications.”

The focus then shifts to making the right thing easy and to how shared responsibility plays out across the layers involved in ensuring responsible AI: the foundational model at the core, followed by safety systems, application development, and user experience. The host notes the parallel with platform engineering questions in DevOps and the importance of making the process of building reliable, safe, and secure code sustainable. Sriram walks through the four layers, emphasizing that while the foundational model and its safety systems are primarily the responsibility of vendors and platforms, application developers also play a role in adding safety mechanisms. The continuous, iterative nature of the process parallels DevOps practices, and teams need to be prepared for occasional failures in the complex architecture of a generative AI system.

“It’s more of the identify, measure, mitigate, and operate loop. Once you identify what the issue is, then you put systems in place, and you want to measure that, and then you continue that. It’s never going to be just like a one-time effort; it’s going to be a continuous process.”
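
To make that loop concrete, here is a minimal sketch of what an application-layer safety system might look like. Everything in it is hypothetical: call_model stands in for whatever foundation-model SDK you use, and the blocklist check stands in for a real content-moderation service.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safety-loop")

# Placeholder blocklist; a production system would call a moderation API instead.
BLOCKLIST = {"example-banned-term"}

def is_flagged(text: str) -> bool:
    """Identify: a naive stand-in for a proper content classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def call_model(prompt: str) -> str:
    """Hypothetical wrapper around your foundation-model endpoint."""
    raise NotImplementedError("plug in your provider's SDK here")

def safe_generate(prompt: str) -> str:
    # Mitigate: refuse flagged prompts before they reach the model.
    if is_flagged(prompt):
        log.info("prompt blocked")  # Measure: count what gets blocked.
        return "Sorry, I can't help with that request."

    output = call_model(prompt)

    # Mitigate again on the way out; the model itself is not the only safeguard.
    if is_flagged(output):
        log.info("output blocked")
        return "Sorry, I can't share that response."

    log.info("request served")  # Operate: these metrics feed the next iteration.
    return output
```

The point is the shape, not the filter: checks run on both the prompt and the response, and the logged counts are what let you measure and iterate over time.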

The discussion then turns to practical recommendations for individuals or companies getting started with generative AI. Sriram advocates a phased approach, starting with easy wins that don’t involve sensitive data, such as using foundational models to generate user documents or chat responses. He outlines three levels of implementation: the first covering simple use cases, the second incorporating AI as a code copilot, and the third introducing custom data for more accurate and relevant outcomes. The conversation also touches on responsible practices, including refining prompts for better outcomes, leveraging zero-shot, one-shot, or few-shot (in-context) learning, and implementing content moderation and rate limiting to ensure ethical use. Sriram stresses the need for a responsible mindset, recommending that users never take generated content as-is and always apply filters or content moderation.

“Right now, the understanding or the thought process around context learning is more beneficial, more accurate, or at least, it’s a much easier way to get to the required level of accuracy than trying to retrain the model using fine-tuning.”
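
As a quick illustration of that idea, here is a minimal sketch of few-shot in-context learning: instead of fine-tuning a model, you prepend a handful of labeled examples to the prompt so the model can infer the task. The ticket-classification task, examples, and wording are all hypothetical.

```python
# Illustrative few-shot examples for a hypothetical support-ticket classifier.
FEW_SHOT_EXAMPLES = [
    ("Reset my password", "Account access"),
    ("The invoice total looks wrong", "Billing"),
    ("The app crashes when I upload a file", "Bug report"),
]

def build_prompt(ticket: str) -> str:
    """Assemble a few-shot prompt; no model retraining involved."""
    lines = ["Classify each support ticket into a category.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}")
        lines.append(f"Category: {label}")
        lines.append("")
    lines.append(f"Ticket: {ticket}")
    lines.append("Category:")
    return "\n".join(lines)

# The assembled prompt is then sent to the foundation model as-is.
print(build_prompt("I was charged twice this month"))
```

The appeal, per Sriram’s point above, is that you can often reach the required accuracy by iterating on the prompt and its examples alone, without the cost and data requirements of fine-tuning.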

In the conclusion of the interview, Sriram outlines three key dimensions—foundation, form, and fit—when considering opportunities in the generative AI space. He suggests that differentiation can occur by innovating at the algorithmic level (foundation), introducing new user experiences (form), or specializing in specific domains or languages (fit). The discussion emphasizes the importance of understanding these axes for startups or businesses seeking to leverage generative AI effectively.

“If you are trying to build a startup or if someone is trying to [decide] ‘how do I differentiate, what do we build’ using generative AI, I like to use a model of foundation, form, and fit.”

Watch the interview
