I recently did a Q&A for the Globe and Mail about how artificial intelligence is changing the nature of cybersecurity, both for better and for worse. Like blockchains, AI has captured the zeitgeist and suddenly people who like to think about hammers are seeing nails almost everywhere.
Deep Instinct co-founder and Chief Technology Officer Eli David, for example, recently claimed that AI’s near-future “will reach near human or superhuman capabilities.” It’s a great marketing approach because it’s so broadly defined. Almost any advance lets you raise the “mission accomplished” banner over the proverbial aircraft carrier control tower. And it’s true that we’re beginning to see some incredible advances, including AlphaGo’s recent triumph over world Go champions. Garry Kasparov (the former world chess champion famously defeated by IBM’s Deep Blue in 1997) gave a great TED talk where he contemplates the amazing possibilities of a man-machine partnership. What kind of epic chess matches are possible when two human players face off with their own AI assistants?
But while it seems our lives will inevitably be transformed by AI, the hype over recent advances has also led to some unfortunate and overheated rhetoric of late. As my friend Dan Shapiro points out, AI is not magic. For the most part it’s not even really intelligent. Like a hammer, it is only a tool: it does what you make it do. I’m reminded of a quote from Conan the Barbarian: “What is steel compared to the hand that wields it?”
I teach a third-year course on the theory of computation, and one of the direct applications of that subject to this issue is the Church-Turing Thesis and the realization that whatever an AI system can do, a human can also do given enough time, paper, and pencils. Seen in that light, AI is not so mysterious. Like a graphing calculator, all it can do is out-race a human in a computation they could have performed for themselves anyway. This should in no way trivialize AI’s achievements, or what it can do for our lives. It should only contextualize what AI is, and more importantly what it isn’t.
In terms of AI in cybersecurity, there seem to be some interesting opportunities for both black- and white-hat applications, most of which ultimately seek to automate and accelerate jobs humans would be doing anyway, from composing compelling phishing emails to fuzz testing and code review. See the Q&A for further discussion.
Read the AI/Cybersecurity Q&A in The Globe and Mail