Navigating Generative AI Governance: Infosec Leader Perspectives
Organizations are looking to generative AI (GenAI) governance as the technology's risks and opportunities continue to emerge. What are their strategies, and how many plan to implement generative AI for security?
One-minute insights:
- Nearly two-thirds of surveyed leaders say their organization has or is developing a generative AI governance strategy
- Over one-third have no plans to offer training on proper usage of generative AI tools, though most are working on a corporate policy
- Nearly all respondent organizations are considering generative AI use cases for security, with vulnerability management and chatbots as the most commonly cited choices
- The most common generative AI vendor strategies among respondents include evaluating model transparency requirements or vendor privacy policies
- Most surveyed leaders plan to hire in the next 12 months to meet generative AI governance or security needs
One-fifth of respondent organizations have established a GenAI governance strategy
Over half (56%) are facing challenges with team or skills gaps when it comes to generative AI governance, and many respondents say their organization is struggling with the regulatory landscape (46%) or unclear governance roles and responsibilities (44%).
Question: Please share any final thoughts on the current state of generative AI governance at your organization. Feel free to elaborate on how you think it could evolve over the next year.
“The biggest issue for us at the moment is around ensuring the output of generative AI is properly tested at scale to ensure errors are below an acceptable rate, or (for different use cases) that output is properly reviewed by a qualified human rather than [given] a cursory review.”
“Shadow use of GenAI continues to be the biggest governance challenge. I expect tools to evolve relatively quickly to identify and block [it].”
Most are working on corporate policies for GenAI, but over one-third have no plans to offer training on proper usage
23% of surveyed leaders say training on proper usage of generative AI is required at their organization, with 11% indicating that this requirement applies to all employees.
Question: Please share any final thoughts on the current state of generative AI governance at your organization. Feel free to elaborate on how you think it could evolve over the next year.
“I'm [in] the education sector and there is a lot of discussion about plagiarism but very little [that] I've seen about AI-generated content or misinformation being included, unknowingly, by students or professors in citations or references.”
“It's a race against time. We don't want to be the ones holding things up, so between now and having better governance, education is the way we're approaching it.”
The majority are considering GenAI use cases for security, with vulnerability management as the most commonly cited application
93% of surveyed leaders with significant involvement in GenAI security or risk management efforts (n = 226) indicate that their organization is considering generative AI use cases for security operations.
Over half (58%) are still exploring potential use cases, but 35% report that their organization has already identified which ones to pursue.
Among respondent organizations that are exploring or pursuing generative AI use cases for security operations (n = 210), the most commonly considered applications are vulnerability management (50%), chatbots (47%) and reporting and/or documentation (46%).
Over one-third list secure application development assistants (38%) or threat intelligence (37%) as use cases under consideration.
Question: Please share any final thoughts on the current state of generative AI governance at your organization. Feel free to elaborate on how you think it could evolve over the next year.
“This is an exciting field. I see generative AI showing up in new ways that will continue to support more and more new ideas and uses. However, I am very skeptical of the output and am curious how this will impact adoption. For example, a false positive AI detector that has false positives would be very bad...and that is the current state of AI.”
“Generative AI might be the future but I still have my reservations. Vendors will give assurances and licenses but [it’s] not IF something goes wrong, to me, it's a matter of WHEN. I fear all [that] they are offering right now will go down the drain. A bit pessimistic? YES, it is.”
Model transparency and vendor privacy policies are common focuses of respondents’ GenAI vendor strategies
58% of respondents with significant involvement in GenAI security or risk management efforts (n = 226) say their organization’s vendor strategies include evaluating third-party applications for model transparency requirements or evaluating vendor privacy policies.
Over half (55%) require, or plan to require, that vendors include data governance assurances in their license agreements.
Most surveyed leaders plan to hire in the next 12 months to meet GenAI governance or security needs
More than three-quarters (76%) plan to hire staff over the next 12 months to address generative AI governance or security needs.
One-third (33%) intend to hire only internal resources for this purpose, while 24% plan to hire both internally and through outsourcing or consulting.
Question: Please share any final thoughts on the current state of generative AI governance at your organization. Feel free to elaborate on how you think it could evolve over the next year.
“We are steadily drinking the AI Kool-Aid. We have a first draft of policies that range from ‘must be’ to ‘it would be nice’, and right now the vendors we are in discussion with are hitting 95% for those.”
“When we use GenAI, we still review it based on [the] ISO 27001 framework.”