Program Catalog
Browse sessions and workshops using the filtering options below.
Join us in Phoenix 8–10 May 2024 for informative sessions and practical guidance.
Dive into world-class sessions and content brought to you by leaders in fields including security, privacy, AI, digital trust and more, all while you earn CPE credits.
7:30 am–12:00 pm Workshop Registration
3:00 pm–6:00 pm Registration
8:30 am–5:00 pm Workshops*
5:00 pm–6:30 pm Welcome Reception
7:00 am–5:30 pm Registration
8:30 am–11:30 am Opening Keynote & General Sessions
11:45 am–12:15 pm Concurrent Sessions
12:15 pm–1:15 pm Lunch
1:15 pm–5:30 pm Concurrent Sessions
7:00 am–5:30 pm Registration
8:30 am–10:45 am Concurrent Sessions
11:00 am–1:00 pm General Session & Lunch
1:00 pm–5:15 pm Concurrent Sessions
5:30 pm–7:30 pm Awards & Social Event
7:00 am–11:30 am Registration
8:30 am–10:00 am General Session
10:30 am–11:30 am Closing Keynote
All times are local.
*Additional registration required.
Tuesday, 7 May | 8:30 am–5:00 pm
Ryan Cloutier
AI is already sweeping through our businesses and our world, and the need for security and risk leaders and executives to keep pace with the rapidly changing risk landscape is urgent. The rise of AI creates exciting opportunities; however, it also carries risk that must be considered, as its impacts can have substantial consequences. This workshop will cover the risks associated with AI and how to identify, assess and respond to them. It will also cover how to educate senior management on the state of AI and what strategies should be put in place to prepare for this rapidly evolving technology.
After completing this session, the participant will be able to:
Tuesday, 7 May | 8:30 am–5:00 pm
Nancy Yuen, Senior Director, SOX Governance and Financial Data Management
Emotional Intelligence is important, especially for leaders. It is the ability to manage your own emotions and to value and respond appropriately to the emotions of the people around you. This workshop will cover skills you can use in the workplace with colleagues and at home. These skills will help you successfully lead teams, communicate feedback, understand your boss, collaborate with others, and manage stress.
After completing this session, the participant will be able to:
Tuesday, 7 May | 8:30 am–5:00 pm
Mark Thomas, CRISC, CGEIT, CDPSE, President
Digital trust is central to every digital interaction. In today's world, people are more connected than ever before. The Internet has brought more opportunities to exchange ideas and information within our neighborhood and worldwide. Customers can purchase goods online and receive them the same day. Technology works in the background to support these interactions and transactions between individuals, enterprises and external parties. The digital trust framework helps guide professionals on their digital transformation journey and encourages them to think about how digital trust needs to be considered at all levels of an enterprise. This workshop will cover the essential components of the framework and how to apply them in various situations in the work environment.
After completing this session, the participant will be able to:
Kick off the conference right with a welcome reception. Connect and network over drinks and conversation with professionals in your industry.
What do the new SEC cybersecurity rules mean for your organization? During this showcase, we will cover the oversight and governance considerations for the SEC rule and how to best be prepared to manage them.
After completing this session, the participant will be able to:
According to the Edelman Trust Barometer, trust in media, business and government is at an all-time low. We have never believed less, agreed less, or trusted less. Yet building trust, and developing the community that surrounds it, remains imperative for any art project, online influencer, startup or brand. So what are the new ways to build trust and community online in an era of mistrust? Based on research and current case studies, what works and what doesn't? And how do we figure out who to really believe and rally behind? New York Times bestselling author Neil Pasricha shares the three emerging principles that answer these questions, followed by a live case study featuring Frank Warren of PostSecret.com.
Even champions need a break sometimes. Whether you have some downtime between sessions or need a few minutes of peace and quiet to catch up on your work, if you are an ISACA Platinum or Gold Member, our Loyalty Lounge offers the perfect respite to help you stay refreshed and focused.
In this session the audience will learn the following key concepts:
1. An overview of CMMC and the current state of DoD expectations.
2. An update on the CMMC proposed final rule and the deltas therein.
3. Considerations for identifying CUI and defining your scope.
4. Building an enclave or injecting NIST SP 800-171 as a foundational activity.
5. Next steps to get your house in order.
After completing this session, the participant will be able to:
Thinking of becoming a member? Already are a member? Now what? When you participate in ISACA’s community, you gain a valuable, global network of professionals who can help you build your knowledge and skills, increase your prospects with current and future employers, and get you moving toward a more rewarding career.
But how do you make meaningful connections and get involved with ISACA?
Join the ISACA Community Engagement team to jumpstart your involvement and learn how to enrich your professional journey. We will share a brief 15-minute presentation, and by the end of this session, you will be able to:
Become a natural networker! This is your chance to discover ways to join the conversation!
Organizations worldwide are vulnerable to cyberattacks, potentially impacting life and health, clean water, safe food supply, economy, functioning of government, peaceful existence, and global stability. New analytical techniques are needed to help understand the likelihood of adversary actions in the cyber domain. To create meaningful analyses of adversary likelihood, we must first ask, "What factors influence their decision to attack?"
In this talk, we present a new framework currently in development that seeks to incorporate an understanding of adversary motivations, capabilities and resources and recognize how our adversaries' national-level goals may drive their strategies to attack in the cyber domain. Framing and organizing information about the adversary to provide a structure and lexicon to organize sparse and incomplete data helps us to recognize where knowledge gaps exist, conceptualize adversaries, and identify the range of possible adversary actions. That allows us to develop long-term perspectives on potential cyber-attacks and evaluate potential mitigation measures.
The objective of this methodology is not about predicting the future but rather anticipating possibilities for the future to inform strategies that will make organizations more resilient and effective across a range of those possibilities. Those analysis results can then be used to inform planning and decision-making about potential remediation measures.
After completing this session, the participant will be able to:
Let's take a step back for context on the options and opportunities for compliance and cyber professionals to take their knowledge and body of work further for the organizations they work with or consult for. The NIST standards and guidance provide an excellent reference for enhancing enterprise cyber plans and programs. With the emerging DoD CMMC standard, the future is bright for ensuring a resilient global cyber supply chain. The FUSION of NIST with the future of CMMC certification provides an opportunity to establish a credible, evidence-based cyber program for small, medium and large organizations globally!
On 31 January 2020, the US Department of Defense (DoD) introduced a new cybersecurity standard, the Cybersecurity Maturity Model Certification (CMMC). Every cybersecurity and compliance professional, including senior executives, must raise their awareness of this important and valuable cybersecurity standard developed by the DoD. The CMMC Model v1.02 was introduced on 18 March 2020. So, why the CMMC? Malicious cyber actors continue to target the Defense Industrial Base (DIB) and the DoD supply chain. This challenge to US national security, including economic security, raised the priority for the DoD to establish a credible and unified cybersecurity standard for organizations that provide services to it, i.e., the cyber supply chain. While this standard is now focused on the DoD DIB, it applies to all industries globally.
In this brief, examine the latest in NIST standards and guidance, including the NIST SP 800-171 r3 Draft, the forthcoming NIST Cybersecurity Framework 2.0 and more. Examine opportunities to comply with NIST and achieve credible certification with the Cybersecurity Maturity Model Certification (CMMC) standard from the DoD. Managing future cyber risk means ensuring a resilient cybersecurity supply chain, and CMMC is a credible, evidence-based standard for securing assets and the supply chain. Federal agencies, state governments, and the private sector will find opportunities to improve cyber defense with the CMMC standard.
After completing this session, the participant will be able to:
When an organization is attacked through a breach of information security controls, the law requires the organization to notify individuals whose personally identifiable information (PII) has been exposed. Such exposure is a privacy risk and may cause significant harm to the individual. When accessed data is transferred to unauthorized individuals to foster criminal activity, the personally identifiable data elements become essential. Criminals harvest and sell this information on areas of the Internet known as the dark/deep web to propagate further criminal activity. This is the primary reason why funding security measures in organizations is important: to protect organizations from data and privacy breaches that result in this type of impact on citizens.
This session will explain the concept of identity takeover using a concrete example. This session will share a strategic framework that can be used by the various stakeholders involved in implementing cybersecurity measures to safeguard sensitive data, with a data-centric focus on the evolution of cyber-attacks. Specific security measures are important and should be implemented appropriately to alleviate cybersecurity threats. The information provided in this session will give the necessary data to show that the cybersecurity decision-making process can be clearly integrated with risk management methodologies.
After completing this session, the participant will be able to:
If you start your next PCI Report on Compliance or Self-Assessment without an accurate scope, you'll waste time, go over budget, end up with remediation work you didn't plan for, and likely feel stressed. At the beginning of any PCI DSS assessment (or any compliance assessment or audit), you must start with scope. Understanding and knowing what's in scope for assessment is vital to creating, building, and maintaining a continuous PCI DSS Compliance program.
After completing this session, the participant will be able to:
Cybersecurity is now a board conversation. So are digital trust and the safety and security of Artificial Intelligence (AI). All are technology-driven challenges without a solution technologists can handle on their own. They require collaboration up and down the organizational chart and around the globe. CISOs are charged with the safety and security of information in all forms. To succeed, CISOs need the active engagement of other parts of the business, and often of external third parties. Who does the CISO engage? How? What do we ask of them? We will begin by discussing why these are suddenly important business topics. We will also talk through the historical trends of technology adoption still present today. That historical perspective will help us frame what works and what does not. We will talk about what digital trust means, what it is, why it is important, what the studies show, and what resources are available. When it comes to AI, we will talk through the fundamental drivers underlying AI and how it differs from anything we have seen before. We will discuss AI's major concerns, what to do about them, and relevant global efforts. We will talk through the major threats of AI. Special consideration will be given to misinformation/disinformation, including deepfakes, and the potential impacts to national security, such as the upcoming presidential election.
Unlike many programs, we will be looking past technical controls. We will talk about the organizational structures that work the best, who in the organization needs to be engaged, and how to handle third parties. A survey of national and international efforts promoting the safety and security of AI will be included.
After completing this session, the participant will be able to:
Intelligence quotient (IQ) is useful in academia, but what about in our work environments? Is there something missing that IQ doesn’t address? Emotional Intelligence (EI) allows us to identify, assess, and manage our own emotions and understand those of others. This presentation will help you recognize and understand emotions while guiding your actions. EI is vital for effective leadership in today's complex and dynamic work environments. This session explores the key principles of leading with EI and provides practical insights and strategies for enhancing EI in leadership roles. Participants will gain a deep understanding of EI, its components and its significance in leadership, enabling them to apply these principles to drive better team dynamics, communication and organizational success.
By the end of this session, participants will be well-prepared to lead with EI, resulting in stronger relationships, improved team dynamics and more effective leadership in their respective organizations. The content will be delivered through presentations, discussions and practical tools to ensure a comprehensive and engaging learning experience.
After completing this session, the participant will be able to:
Are you struggling to evaluate risk in a timely fashion? Traditional risk analysis struggles to keep pace, making swift yet accurate assessments a pressing need. This session explores the transformative potential of Artificial Intelligence (AI) in expediting risk evaluations without sacrificing depth or security. This talk will delve into AI's capacity to rapidly detect patterns and vulnerabilities, emphasizing the importance of diverse training datasets to prevent biases. The presenter will address the balance between AI's computational strength and the irreplaceable intuition of human analysts, advocating for a collaborative approach. He'll highlight real-world successes and potential pitfalls, and provide a roadmap for integrating AI responsibly into risk assessment processes. Attendees will leave with a clear vision of AI's role in future-proofing risk analysis. In addition, attendees will learn immediate, actionable activities to begin assessing risk rapidly with AI.
After completing this session, the participant will be able to:
What are the key technology risk areas that companies are now facing on a day-to-day basis? Where is internal audit (IA) focusing its efforts from a digital/technology perspective to create the highest impact and deliver the most business value to the organization? IT IA should be focused on the greatest technology risks to the organization in a way that enables the company's business strategies and elevates IA's value, impact, and trusted advisor status to its stakeholders.
As part of the presentation, we will discuss the evolving technology risk landscape and the risks it introduces and accelerates throughout the business environment and our organizations. We’ll introduce an IT risk framework that can be used to drive IA’s own IT risk assessment process for enhanced results – with the goal being to identify the greatest digital technology risks to the organization and what the optimal IA responses should be. We will dive into a top ten of IT IA high-impact focus areas for all IA functions to consider. Using real-life examples, for each topic area we’ll talk through the key risks involved, the questions IA groups should be asking, and suggested IA focus and response options to drive collaborative group discussion during the session.
After completing this session, the participant will be able to:
Imagine you're a superhero. You protect the people who rely on and exist within and around your organization from the ever-changing and ever-growing cyber threats across the globe. You need the right tools and the right strategies to fight against these digital threat actors. And what's the first gadget in your toolbelt? A clear understanding of the threat landscape to build a comprehensive threat assessment program.
For this program to succeed, you need four main components: your asset landscape, threat landscape, processes/controls and partners. While connecting these seemingly disjointed pieces sounds daunting, the industry offers several publications, mapped mainly through the Center for Threat-Informed Defense, that act as a handy-dandy sidekick when brought together: MITRE ATT&CK®, Vocabulary for Event Recording and Incident Sharing (VERIS), NIST SP 800-30 Guide for Conducting Risk Assessments, and NIST SP 800-53 Security and Privacy Controls for Information Systems and Organizations. As for the partners piece of this program, observe the people around you and find the other superheroes to join your team.
Finally, like any good superhero team, a Nick Fury exists – the executives. The magic lies within translating technical data-driven approaches and results into actionable tasks that support the business strategy. This translation, combined with your threat assessment program, will set you up to take on emerging threats as the world changes.
Join us as the "Cyber Threat Assessment Avengers" assemble!
After completing this session, the participant will be able to:
IT security, cybersecurity and privacy protection are vital for companies and organizations today. The ISO/IEC 27000 family of standards is designed to assist organizations in developing comprehensive information security and data privacy programs to counter threats from evolving cybercrime, lack of employee awareness and training, and violations of laws and regulations, along with the costly fines and reputational damage that could result should a breach or other incident occur. ISO/IEC 27001 is perhaps the world's best-known standard for information security management; however, additional best practices in data protection and cyber resilience are covered by several other standards in the ISO/IEC 27000 family. Together, these standards enable organizations of all industries and sizes to manage the security and privacy of their information assets and those entrusted to them by third parties. We'll also discuss where these standards stand now that ISO/IEC 27001:2022 was published in late 2022.
Specific standards that we will cover in our presentation will include:
• ISO/IEC 27001 (ISMS) – the foundational element
• ISO/IEC 27017 (cloud services security)
• ISO/IEC 27018 (protection of PII in public clouds for PII processors)
• ISO/IEC 27701 (PIMS) and complementary GDPR certification frameworks such as Europrivacy Seal and the overlap with the EU Cloud Code of Conduct
After completing this session, the participant will be able to:
Multi-agent systems are the future of AI. Rather than building monolithic AI models, multi-agent systems distribute capabilities across specialized modules—for example, separate agents for computer vision, language, planning, API automation, etc. By combining multiple AI models, data sources and technologies, multi-agent systems execute complex goals without requiring general artificial intelligence. To build and run multi-agent systems at scale, organizations need an intelligent operating system. This OS should provide a visual workflow builder for non-experts to develop agents, scalable runtimes for deploying agents, data pipelines for sharing information, monitoring tools for governance, and APIs for external integrations.
With the right OS, enterprises can transform operations by embedding AI agents into processes. Use cases span supply chain, IT, customer service, finance, manufacturing, healthcare and more. The modular approach allows starting small and expanding. Multi-agent systems do require thoughtful change management. New levels of automation can disrupt people and processes. Recommended practices include starting with narrowly defined pilots, involving stakeholders early, setting appropriate expectations, and proactive communication. By distributing capabilities across specialized agents, multi-agent systems represent the most effective and ethical approach to implementing AI. With an enterprise-grade OS, any organization can build the AI workforce of the future, one intelligent agent at a time. This pragmatic approach captures tremendous value without requiring human-level artificial general intelligence.
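The dispatch idea described above can be sketched in a few lines. This is a hypothetical illustration only, not any vendor's actual product: the `AgentSystem` class, its capability names, and the toy agents are all assumptions made for the example.

```python
# Minimal sketch of a multi-agent dispatcher: specialized agents register
# narrow capabilities, and a multi-step goal is routed step by step to the
# right agent. All names here are illustrative assumptions.
from typing import Callable, Dict, List, Tuple


class AgentSystem:
    def __init__(self) -> None:
        # Map each capability name to the agent (callable) that provides it
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, agent: Callable[[str], str]) -> None:
        # Each agent is a narrow module (vision, language, planning, ...)
        self.agents[capability] = agent

    def run(self, plan: List[Tuple[str, str]]) -> List[str]:
        # Execute a complex goal by dispatching each step to its agent
        return [self.agents[capability](task) for capability, task in plan]


system = AgentSystem()
system.register("language", lambda t: f"summary of {t}")
system.register("planning", lambda t: f"plan for {t}")

results = system.run([("planning", "quarterly audit"), ("language", "findings")])
print(results)
```

In a production system the lambdas would be replaced by separate AI models or services, and the dispatcher would add the data pipelines, monitoring, and governance hooks the session describes.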
After completing this session, the participant will be able to:
This session delves into the pivotal role played by the SEC cyber disclosure rule in corporate governance of cyber risk. It commences with an exploration of established methods for ascertaining financial materiality and examines their inherent limitations. Subsequently, the presentation delves into the intricacies of computing cyber materiality thresholds, informed by the presenter's original research in this domain. Drawing on comprehensive industry insights, the presenter scrutinizes prominent cybersecurity breaches and retroactively benchmarks them against conventional financial materiality standards. The results underscore the necessity of a shallow threshold, as low as one basis point, for a comprehensive evaluation of cyber materiality. This finding is the bedrock for the presenter's proposition of introducing a quantitative dimension to complement traditional qualitative materiality approaches. Moreover, attendees gain insights into the Cyber Materiality Heuristic process, offering a practical exemplar for organizations embarking on the development of their materiality frameworks.
The session takes an expansive approach by illustrating how materiality thresholds can be seamlessly integrated into a firm's Enterprise Risk Management (ERM) governance processes. Furthermore, the presentation scrutinizes recent 8-K and 10-K/20-F SEC filings from organizations affected by the new regulatory landscape, evaluating their transparency and the adequacy of their disclosures from a quantitative perspective. Finally, the session underscores how these newfound tools can be harnessed to make informed strategic decisions regarding risk transference and mitigation. As a takeaway, attendees depart equipped with tangible tools and methodologies for computing materiality, thereby enhancing their ability to govern their cyber risk.
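The basis-point threshold mentioned above can be illustrated numerically. This sketch is a hypothetical example only: the one-basis-point factor comes from the session description, but the revenue and loss figures are invented for illustration, and this is not the presenter's actual Cyber Materiality Heuristic.

```python
# Illustrative computation of a quantitative cyber materiality threshold:
# a basis point is 1/10,000, so 1 bp of a financial baseline such as
# annual revenue gives a dollar threshold for flagging an incident.

def materiality_threshold(annual_revenue: float, basis_points: float = 1.0) -> float:
    """Return a dollar threshold: revenue * (basis_points / 10,000)."""
    return annual_revenue * basis_points / 10_000

# Example (invented figures): a firm with $5B annual revenue at 1 bp
threshold = materiality_threshold(5_000_000_000)   # 1 bp of $5B = $500,000
loss_estimate = 2_300_000                          # estimated breach cost
is_material = loss_estimate >= threshold
print(f"threshold=${threshold:,.0f}, material={is_material}")
```

The point of such a shallow threshold is that a breach cost far below traditional (e.g., 5% of income) financial materiality lines can still trigger a quantitative cyber materiality flag.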
After completing this session, the participant will be able to:
In the current Artificial Intelligence (AI) transformation era, professionals must grasp the fundamentals of AI and its associated risks to foster digital trust. In this talk, we will journey through the AI landscape, exploring the basics, identifying key risks, and learning how to establish trust in AI systems. We will also delve into emerging regulatory frameworks, practical control mechanisms, and the potential of AI to enhance digital trust. Starting with an AI introduction, attendees will gain a shared high-level understanding, exploring core concepts and terminology. The discussion delves into key AI risks using real-world examples, highlighting issues like bias, privacy, security, and ethics that require diligent mitigation.
The session then explores essential responsible AI principles, emphasizing fairness, transparency, accountability, and ethics for building trustworthy AI systems. By analyzing AI regulations, the session illustrates frameworks like the European AI Act, showcasing their impact on developers and users and promoting responsible AI adoption. Practical insights are provided into embedding control measures throughout the AI lifecycle to mitigate risks, ensuring secure, ethical, and regulatory-compliant AI systems. Lastly, the session considers AI's role in enhancing digital trust, showcasing applications like anomaly detection, natural language processing, and behavioral analysis in strengthening security and privacy.
After completing this session, the participant will be able to:
As a series of tsunamis continue to crash into the tech labor force, what does this mean for professionals forging careers in cybersecurity, audit, risk, governance and privacy?
One colossal wave is the proliferation of fast-moving, high-momentum technologies that rapidly and radically redefine competitiveness and alter how new business models are secured. Mounting pressures on employers to restructure their workforces are creating brand-new job and career opportunities in security and substantially changing what it takes to be a tech professional. Another massive wave is the so-called Great Workforce Reshuffling that took root during the COVID-19 pandemic (changing how, when, where, and with whom we work) and is now becoming normalized. This trend has significantly altered how tech professionals are managed and paid, how their performance is evaluated, and how security is being strategized and operationalized. It has also altered the demand balance between hard and soft skills and reordered the composition of work teams.
This session will draw from Foote Partners' unique research base (4,300 employers) to challenge conventional thinking about how the unfolding Future of Work will play out in the security workforce in the next 5-10 years. Valuable insights will be provided about new job and career prospects driven by the powerful integration of emerging technologies and workforce transitions: AI Revolution (Applied AI, ML, Generative AI); Compute/Connectivity (Cloud/Edge, IR); Digital Future-Building (Next-Gen SW Development, Trust Architecture/Digital Identity); Cutting-Edge Engineering (Mobility, Bioengineering, SpaceTech); and Sustainability (Electrification/Renewables, ClimateTech).
After completing this session, the participant will be able to:
This presentation will discuss the latest NIST CSF v2.0, which was updated to deal with increasing threats to critical infrastructure and organizations of all types and sizes. This government-authored cybersecurity framework is the gold standard for cybersecurity defense, and everyone who upholds digital trust can benefit from understanding and implementing it. Ironically, it is only as ironclad as the level of understanding of relevant stakeholders within the enterprise, most of whom are probably ISACA members or hold ISACA certifications. Hence, this session will also talk about how key enterprise stakeholders can adopt and integrate this cybersecurity framework into their governance, risk and compliance practices, boost enterprise-wide digital trust and ultimately elevate the value of IT audit, risk and compliance professionals within the organization.
The recent cybersecurity incidents and ransomware attacks that hit the US hospitality and medical care industries are consequential for anyone who works to uphold digital trust in this country. Those incidents, along with the advent of tools that can give malicious actors an edge, such as generative AI, are among the reasons behind the recently updated NIST CSF v2.0, in which NIST, the US National Institute of Standards and Technology, updated its Cybersecurity Framework. The new framework contains several cybersecurity requirements that can scale to mitigate these threats and boost digital trust. For instance, it added the Governance function to the other five and expanded its scope to provide cybersecurity guidance for all organizations beyond critical infrastructure.
After completing this session, the participant will be able to:
For every organization, there is a delicate balance to strike between innovation and risk — one that informs every interaction between your employees, customers, key stakeholders, and the supply chain. We’d be hard-pressed to name a more seismic innovation than the explosion of artificial intelligence (AI), particularly generative AI (GAI) with its ability to generate highly realistic and contextually accurate outputs dynamically. Scaling alongside this innovation are the risks, whether they are preexisting risks such as the proliferation of disinformation or emerging risks such as AI “hallucinations,” the leakage of sensitive information or inference attacks.
Obviously, there’s no going back to the days before large language models (LLMs), such as OpenAI’s GPT-4, broke into mainstream public consciousness. The AI genie is out of the bottle. The question we now face is, how do we move forward responsibly? The best answer: proactively building into your organization’s AI ethics model a focus on governance, risk mitigation, and compliance.
In this talk, we will discuss considerations for establishing an AI Governance Framework that balances innovation and risk.
After completing this session, the participant will be able to:
With the lines between traditional IT and industrial control systems blurring, and with the addition of Internet of Things technology, IT auditors need to focus on SCADA and related technologies in light of greater connectivity. This presentation will start by briefly explaining the different technologies involved, such as manufacturing execution systems (MES), SCADA, human-machine interfaces (HMIs) and programmable logic controllers (PLCs), and then describe the different implications of these technologies from a controls perspective.
This session will cover the implications of the Internet of Things for SCADA, a rapidly growing area of exposure for factories and plants as vendors move to remotely maintained and managed systems. It also ties in with further testing programs we have deployed in networking and the tools we have written and deployed to analyze network weaknesses at plants and factories.
In addition, the presenter will show how important it is to have control over PLCs, and accurate asset management, spares, stock control and records—this is now a vital area for security, patching and cost control. We will also show how these audits affect ESG auditing and the value an IT auditor can add.
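The kind of PLC asset-management check described above can be sketched in a few lines. This is an illustrative sketch only; the asset tags, firmware versions, and baseline are hypothetical assumptions, not part of the session material.

```python
# Sketch of a PLC asset-inventory check of the kind the session
# describes: flagging controllers whose firmware lags a known-good
# baseline or that lack a stocked spare. Tags and versions are
# hypothetical illustrations.

plc_inventory = [
    {"tag": "PLC-MIX-01", "firmware": (2, 8), "spare_on_hand": True},
    {"tag": "PLC-PKG-07", "firmware": (1, 4), "spare_on_hand": False},
]

BASELINE = (2, 0)  # minimum acceptable firmware version (assumption)

def audit_findings(inventory):
    """Return (tag, issue) pairs needing a firmware patch or a spare."""
    findings = []
    for plc in inventory:
        if plc["firmware"] < BASELINE:
            findings.append((plc["tag"], "firmware below baseline"))
        if not plc["spare_on_hand"]:
            findings.append((plc["tag"], "no spare in stock"))
    return findings

print(audit_findings(plc_inventory))
```

Keeping the inventory as structured records rather than a spreadsheet makes checks like this repeatable between audits.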
Finally, this session covers related systems often integrated into plant networks, such as entry systems and CCTV, and the security issues these systems are exposed to. All of this is based on practical experience over many years in the field, plus recent in-depth reviews of current ICSs used in manufacturing. An up-to-date and practical session with plenty of case studies from real systems.
After completing this session, the participant will be able to:
Do you ever feel like the answer is right in front of you? With Word to the Wise, ISACA’s quiz game, the answers will be on the tip of your tongue — just don’t shout them out! Contestants will compete on stage to solve humorous digital trust-themed word games and trivia questions. Audience members can play along, and lucky audience members will even win prizes right from their seats!
As AI advancements revolutionize business decision-making, understanding how to evaluate and mitigate AI-associated risk is top of mind for all companies. In this session, participants will learn practical ways to scope and prioritize where to start when including AI risk in their wider risk assessment methodology. Participants will learn to critically evaluate whether AI risk is present, assess the potential impact, and navigate the risks posed by AI-powered products and vendors.
We will focus on the following five domains of risk:
Security: Understand key questions to assess potential security implications of how an AI solution is developed, where it is hosted and how it is accessed.
Privacy: If personal information will be used in the scope of an AI initiative, what are the additional considerations? The domain explores these considerations and how to critically evaluate any given AI system to ensure responsible AI deployment, data security, and compliance.
Availability: Some AI functionality is a bolt-on to established workflows; however, it will be the whole product for other use cases. We’ll discuss ways to identify what availability and business continuity concerns could arise if AI technology is relied on for extended periods.
Intellectual Property: When it comes to generative AI, there are several aspects to be mindful of, including whether the AI was trained on data to which legal rights were obtained and who owns the content outputted by the model. By carefully addressing IP concerns, companies can better assess and manage the potential risks of integrating AI into their business operations while safeguarding their intellectual property rights.
Quality: Employees often undergo rigorous hiring exercises and evaluations to determine if they can perform the role. They also have managers and performance expectations. Therefore, how this same paradigm plays out for artificial intelligence needs to be determined. It is critical to know if what is getting generated will work. We'll explore techniques to understand how quality in AI can be assessed and monitored.
Once we lay the groundwork for top AI risks, we’ll discuss common mitigation strategies. This will be a combination of controls you should assess for within the technology itself, in addition to activities you can implement within the business processes when AI is relied upon to avoid these top risks. In closing, we will dispel some common misconceptions about Generative AI and LLMs, reinforcing the need for informed and thoughtful use of AI. Participants will better grasp the difference between perceived risks and genuine threats to their company and leave empowered to navigate AI implementation and vendor relationships with greater resilience and efficacy. This session is ideal for risk, security, and privacy professionals, but decision-makers at firms leveraging AI will also benefit. The ultimate goal of this session is to ensure that participants comprehend the potential nuances of AI usage, enabling them to make informed decisions around AI and steer clear of the pitfalls often associated with its incorporation into business activities.
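The five domains above can be captured in a simple scoring sketch. The domain names come from this session description; the 1–5 scale, weights, and aggregation rule are assumptions for illustration only.

```python
# Illustrative AI risk assessment across the five domains discussed
# above. The 1-5 scale (1 = low risk, 5 = high risk) and the simple
# average are assumptions, not a prescribed methodology.

DOMAINS = ["security", "privacy", "availability",
           "intellectual_property", "quality"]

def assess_ai_risk(scores):
    """Aggregate per-domain scores into an overall rating and flag
    any domain scoring 4 or above for deeper review."""
    missing = [d for d in DOMAINS if d not in scores]
    if missing:
        raise ValueError(f"unscored domains: {missing}")
    overall = sum(scores[d] for d in DOMAINS) / len(DOMAINS)
    flags = [d for d in DOMAINS if scores[d] >= 4]
    return {"overall": round(overall, 1), "flagged_domains": flags}

vendor_scores = {"security": 2, "privacy": 4, "availability": 1,
                 "intellectual_property": 5, "quality": 3}
result = assess_ai_risk(vendor_scores)
print(result)
```

Even a crude structure like this forces every domain to be scored, which is the session's point: AI risk belongs inside the wider risk assessment methodology, not alongside it.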
After completing this session, the participant will be able to:
Applied Data Management for Privacy, Security and Digital Trust provides information to enable privacy and governance professionals to position enterprise data management programs to support privacy and security programs. This book describes the components and practices needed to develop an effective data management program, addresses the challenges enterprises face when incorporating data management into privacy and security, and includes an example use case scenario. The primary audience of Applied Data Management for Privacy, Security and Digital Trust comprises privacy professionals and data governance professionals who intend to support privacy efforts.
Technology permeates today's enterprise ecosystem, and the basic customer concerns of data availability and data quality must be addressed to ensure that the enterprise concerns of privacy, security, regulatory compliance and profit are met. Cybersecurity, privacy, data integrity, data governance and regulatory compliance are among the top 10 risk concerns identified by IT audit teams. A lack of security creates vulnerability risk. Not addressing privacy creates noncompliance risk and privacy harms to individuals, increasing the probability of loss of revenue, reputation, and trust.
The purpose of executing data management well is to support privacy and security, ultimately supporting digital trust. Digital trust is the confidence in the integrity of the relationships, interactions, and transactions among providers and consumers within an associated digital ecosystem. This includes the ability of people, organizations, processes, information and technology to create and maintain a trustworthy digital world. Digital trust requires strong privacy and security programs that are supported by effective data management practices.
After completing this session, the participant will be able to:
The auditing world stands at the cusp of a transformative era with the emergence of Large Language Models (LLMs) like ChatGPT. This session comprehensively explores LLMs, from their history to their multifaceted functionalities. We begin with an LLM primer, covering their evolution, working mechanisms, and notable variations like Bard, Llama2, and Co-pilot. The session emphasizes the versatility of LLMs, highlighting that they aren't confined to mere text interactions but hold immense potential in data analysis. Building on this foundational understanding, participants will be introduced to the diverse prompt formats available, ranging from zero-shot prompting to complex data analysis. Real-time exercises with ChatGPT ensure that participants gain both theoretical insights and practical, hands-on experience.
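The prompt formats mentioned above can be illustrated with plain string templates. The audit task wording and the worked examples below are hypothetical, and no specific LLM API is assumed; the resulting strings would simply be sent to whichever model is in use.

```python
# Sketch of zero-shot vs. few-shot prompt construction for an audit
# task. Task wording and examples are hypothetical illustrations.

def zero_shot(task, text):
    """Ask the model directly, with no worked examples."""
    return f"{task}\n\nInput:\n{text}\n\nAnswer:"

def few_shot(task, examples, text):
    """Prepend labeled examples so the model can infer the pattern."""
    shots = "\n\n".join(
        f"Input:\n{inp}\nAnswer: {out}" for inp, out in examples
    )
    return f"{task}\n\n{shots}\n\nInput:\n{text}\n\nAnswer:"

task = "Classify the control deficiency below as High, Medium, or Low."
examples = [
    ("Terminated users retained admin access for 90 days.", "High"),
    ("One change ticket lacked a peer-review signature.", "Low"),
]
prompt = few_shot(task, examples, "Backups were not tested this quarter.")
print(prompt)
```

The difference is only in how much context precedes the question: zero-shot relies entirely on the model's prior training, while few-shot lets it infer the expected output format from the examples.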
The heart of the session lies in the exploration of the audit lifecycle. We investigate how LLMs can seamlessly integrate into various audit stages, from planning and execution to reporting and continuous improvement. Participants will witness practical examples, like using LLMs for scoping, testing, work program documentation, and even simulating interview analysis. However, as with any powerful tool, LLMs come with challenges. The session concludes with a critical examination of the risks posed by LLMs, including data privacy concerns, hallucination, and prompt hacking. To ensure responsible and secure usage, we also discuss potential controls and safeguards. This session is a holistic exploration of the intersection of AI and auditing.
After completing this session, the participant will be able to:
Do you have a plan for your Digital Trust career? Not long ago, the choices were limited, falling into two major buckets: security and audit. Today, Digital Trust careers are staggering in their variety. Advances such as generative AI and ambient computing promise to open even more avenues for professional exploration. But how do you choose, and how do you get where you want to go?
This discussion will help participants plot a course to develop the Digital Trust career of their dreams. From picturing your success as you receive your future award for contributions to the profession, we will work backward to the steps you can take in the short and medium term to help you achieve success. What experiences do you need to have? What do you need to learn? What certifications should you work toward? Each participant's answers will differ, but this session will help them start the process.
After completing this session, the participant will be able to:
To address these concerns and the growing demand for internationally accepted guardrails and safeguards around the use and development of AI, ISO and IEC have created the ISO/IEC 42001 (ISO 42001) standard, which provides a certifiable AI management system (AIMS) framework specifying the requirements for establishing, implementing, maintaining and continually improving an AIMS for organizations looking to implement AI in a safe and transparent manner. The AIMS takes a risk-based approach, with controls and guidance for implementing organizational and technical measures to mitigate risk, including system-level controls.

Neither legislators nor ISO have historically kept pace with developments in technology, but the game-changing nature and exponential growth of AI has prompted intense attention in the areas of regulation, governance and compliance. Many people are suspicious of AI, and the objective of ISO 42001 is to ensure that the critical issues AI faces in the areas of trust, ethics and social concerns are addressed as the technology continues to progress and spread its influence. AI must be reliable, fair and transparent, all promoting overall trustworthiness. ISO 42001 provides a model for accountability and governance, rooted in how organizations approach risk management and the responsible use of AI, with the objective that an AI system should be trustworthy along multiple dimensions, including fairness, accountability, transparency, reliability, robustness, safety, privacy, security and accessibility.
After completing this session, the participant will be able to:
In an era where Generative AI is pushing the boundaries, the need for robust governance has never been more critical. As IT leaders and management professionals, understanding how to manage Generative AI systems effectively is paramount.
This presentation delves into the intricate Generative AI ecosystem and explores the essential elements of a comprehensive governance framework tailored to this transformative technology. We will explore:
Join us to explore how a well-structured governance framework can empower your organization to harness the potential of AI while maintaining ethics, transparency, and compliance. Together, we'll navigate the challenges and opportunities that Generative AI presents to IT governance professionals.
After completing this session, the participant will be able to:
Ransomware groups invest a lot of time, money, data, and technology in their business. They are no longer lone wolves or traditional threat actors working in dark, dingy basements. They collaborate with other threat actors, develop in-house solutions, and create customer success teams. Today's adversaries are cyber-crime startups building billion-dollar empires.
After completing this session, the participant will be able to:
Organizations must examine risk through the lens of today's dire talent retention issues. Organizations have control over retaining talent, and yet the statistics are alarming: cybersecurity professionals are unhappy with their current employment and change jobs regularly. Talent retention controls are sorely needed, given that many organizations do not follow best practices for retaining or hiring cybersecurity professionals. This negligence places an organization in a higher risk bracket, making compliance controls all the more necessary.
After completing this session, the participant will be able to:
The concept of privacy online feels like a joke right now: users have become the product. GenAI can make this problem much worse if not used responsibly. For example, imagine a model that can get to know the most intimate details of your private life: your insecurities, personality type, and attachment style. A model could be trained to take all of this and deliver the perfect persuasion pitch to convince you to buy something. If not handled correctly, GenAI could become what a casino is to a gambling addict, what cigarettes are to a smoker, and what Black Friday sales are to a shopping addict.
There's good news, though. This doesn't have to be our path. We can learn from the past and restore trust and privacy online. The Internet wasn't intended to be a place to have our privacy invaded; it became that way. That being said, a new generation of search tools puts ethics and trust front and center, pledging not to take ads or share data. GenAI has the potential to have a transformative, positive impact on the way we learn, how we communicate, how much time we have, and what we get to work on. In this session, we'll explore what it will take to leverage GenAI for good and ensure it doesn't exacerbate existing privacy issues.
After completing this session, the participant will be able to:
In a SOC 2 examination that addresses the confidentiality criteria, the system boundaries would cover, at a minimum, all system components as they relate to the confidential information life cycle: the collection, retention, use, disclosure, and disposal or anonymization of information by well-defined processes and informal ad hoc procedures. In SOC 2, “confidentiality” applies to various types of sensitive information that vary from organization to organization, but often include nonpublic information such as the following: regulatory compliance information; financial information used for both internal and external reporting purposes; confidential sales information, including customer lists; confidential wholesale pricing information and order information; confidential product information, including product specifications, new design ideas, and branding strategies; and proprietary information provided by business partners, including manufacturing data, sales and pricing information, and licensed designs. Sensitive information also includes personal information. Personal information is nonpublic information about or related to an identifiable individual, such as personal health information or personally identifiable information (such as personnel records, payment card information, and online retail customer profile information).
In this session, we will discuss ways a service auditor might approach a SOC 2 examination that addresses the Confidentiality Trust Services Criteria, including best practices for evaluating the confidential information life cycle and defining the boundaries of the System.
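The confidential information life cycle described above can be sketched as a simple coverage checklist a service auditor might walk through. The stage names come from the session description; the example questions are illustrative assumptions, not authoritative SOC 2 criteria.

```python
# Sketch of the confidential information life cycle from the session
# description, paired with illustrative evaluation questions
# (the questions are assumptions, not official Trust Services Criteria).

LIFECYCLE = {
    "collection": "Is confidential data identified and classified at intake?",
    "retention": "Are retention periods defined and enforced?",
    "use": "Is access limited to personnel with a business need?",
    "disclosure": "Are sharing agreements and approvals documented?",
    "disposal_or_anonymization": "Is data securely destroyed or anonymized?",
}

def coverage_gaps(evaluated_stages):
    """Return life-cycle stages not yet covered by the examination."""
    return [s for s in LIFECYCLE if s not in evaluated_stages]

gaps = coverage_gaps({"collection", "use", "disclosure"})
print(gaps)
```

Walking every stage of the cycle is one way to make sure the system boundary covers the full life of the data, not just where it enters the system.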
After completing this session, the participant will be able to:
When asked, most organizations still lack a complete list of their vendors. They are unaware of whether those vendors maintain the same level of security and privacy controls required of the organization to meet legal, regulatory, and contractual obligations. There is often little consideration of, or desire to know, what data elements are shared, where the data is shared, or whether there is legal justification to share it.
Most existing privacy laws only require organizations to disclose categories of data; however, it is important to know the data elements too, a point that routinely receives pushback. The rationale is that it is hard enough to manage categories of data and compliance obligations; managing at the data element layer seems impossible. While that may be true, the fact is that some laws require it, and most are unaware of this requirement. Colorado's student privacy law requires all third parties to disclose the data elements obtained, whether provided to the third party or added to the record by the vendor, and this information must be publicly available to parents and caregivers. Suppose an organization with student data, such as a school, does not perform adequate security and privacy due diligence on a third party before entering into a contract. In that case, the third party may lose the ability to do business in the entire state. That is huge and could significantly impact organizational growth and revenue. This level of detail is also needed to meet other privacy requirements such as records of processing activities, data flows, inventories, privacy notices and much more.
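Tracking vendors at the data-element layer rather than the category layer can be sketched as below. The vendor names and fields are hypothetical illustrations, and the gap check is an assumption about how a program might flag category-only disclosures.

```python
# Sketch of tracking vendor data sharing at the data-element level,
# not just the category level, as the session argues some laws require.
# Vendor names and data fields are hypothetical illustrations.

vendors = [
    {"name": "GradeBook Co", "categories": ["academic records"],
     "elements": ["student_id", "course_grades"]},
    {"name": "LunchPay Inc", "categories": ["financial"],
     "elements": []},  # only categories disclosed: a potential gap
]

def missing_element_disclosure(vendor_list):
    """Flag vendors that disclose data categories but no data elements."""
    return [v["name"] for v in vendor_list
            if v["categories"] and not v["elements"]]

print(missing_element_disclosure(vendors))
```

An element-level inventory like this also feeds the other artifacts the session mentions: records of processing activities, data flows, and privacy notices.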
Inadequate third-party risk management leads to increased breaches, data loss, more laws, and executive orders. This session will present a third-party risk management program driven by compliance, privacy, and security that organizations can use entirely or use components to strengthen their existing program.
After completing this session, the participant will be able to:
Celebrate those who exemplify the best of the IS/IT industry by attending the ISACA Awards. Mingle with winners and network with industry leaders, all in a friendly atmosphere.
All registrations include an Awards and Social Event ticket; to purchase a guest ticket, please visit here
Artificial Intelligence and Machine Learning, Digital Trust, and Smarter Devices are just a few of today's technology trends. As we continue to move along the Digital Transformation spectrum, join us for a panel discussion with the Chiefs: a Chief Risk Officer (CRO), a Chief Audit Executive (CAE) and two Chief Executive Officers (CEOs). These four experts will discuss some of the most prevalent emerging technologies, their impact on the business community, and best practices on how to assess and manage them.
Risk became a 4-letter word during the pandemic, but as ambitious professionals and organizations hungry to meet our potential, we need to take risks at work.
Risk-taking sounds good in theory, but most of us need help moving from wishful thinking to execution. Why? Because when we have a chance to take a risk in the moment, we feel awkward, and that awkwardness bumps up against our deep desire for others to approve of us and what we do. As it turns out, what we do in the face of taking risks, often despite others' judgment, leads to the greatest growth. After all, the fastest path to major improvement comes from strengthening what's weak rather than what's strong.
Join 2x TEDx Speaker, Executive Coach, and Workplace Performance Expert Henna Pryor and learn why conditioning for awkwardness is your secret weapon for strengthening your risk-taking muscle—and how to do it right.
After completing this session, the participant will be able to: