RSA Peer-to-Peer (P2P) sessions are some of the hidden gems that too many RSA attendees overlook in the organized chaos that is the world’s largest annual security conference. I’ve had the opportunity to facilitate several P2P sessions at recent RSA conferences including last year’s session titled “Practical Applications of AI in Security: Success Stories from the Field”. The impetus for this session came to me after acknowledging that AI had gained full-blown buzzword status in early 2018. In what was a slightly selfish motive for the session, I wanted to cut through the hype and find out what certain organizations were doing to capture the promise that AI and machine learning held.
I wasn’t disappointed; in fact, the session was standing room only and numerous attendees were turned away. In the 45-minute discussion that followed, several key points came up, including:
- Few of the peers had internal AI projects underway (however, given there were only 25 peers in the P2P session, the group might not have been representative of the security community at large).
- The reference projects that did exist were early-stage – several peers had very early-stage AI projects, but most attendees were interested in learning more about AI embedded in vendor products.
- Most peers were unable to evaluate vendor claims – the surprising takeaway from the session was how the majority of attendees felt powerless to interpret vendor claims involving AI and machine learning.
Given the outcomes of last year’s session, I couldn’t resist following up on the theme of vendor claims as more and more vendors rolled out their own messaging surrounding AI and machine learning. I was delighted to find out that my session, “Vetting Vendor Artificial Intelligence Claims: Separating Fact from Fiction,” was accepted for the upcoming 2019 RSA Conference in early March. The session sets out to cover the following territory:
How are security professionals validating vendor claims involving artificial intelligence in security products? Do organizations need a data scientist on board to interpret machine learning and AI feature claims? This session will enable non-data scientists and AI novices to better understand and interpret security product vendors’ statements involving machine learning and AI.
To say I’m looking forward to the session is an understatement. At a minimum it should be a lively discussion, as by Wednesday at 10:40 a.m. most attendees will have had an opportunity to walk the halls of the Moscone Center and soak up countless sales pitches involving AI and machine learning. In preparation for the session, I am designing my facilitator strategy, and as I do that, I am reaching out to key security leaders who have insight into these challenges. Their feedback will help guide our discussion. My initial questions include:
- How are buyers distinguishing AI capabilities from hard-coded rule sets?
- Who within their organization is taking the lead on vetting vendor claims?
- How are buyers able to independently quantify the percentage improvement that AI represents?
If you have thoughts or input, feel free to reach out: DM me on Twitter at @johnbdickson or message me on LinkedIn.