Summary: Phil Venables published one post every two weeks in 2024, covering various topics in technology and risk management. His top posts focused on security training, risk appetite, and the complexities of cyber risk quantification. Looking ahead to 2025, he plans to explore AI risks, emerging threats, and the interplay of security, privacy, and compliance.
A reminder that while we correctly focus on the immediate risks of generative AI, we also need to look at second-order effects: the risks that come from what comes next. The bottom line is that we should be appropriately cautious about AI, but not so cautious that we forgo the truly massive upside that the bold but responsible use of this technology will give us in a range of fields.
Research and practical tactical advances are needed across "zero trust", service meshes, higher-assurance trustworthy computing, default encryption, sandboxing/enclaves, hardware-assisted security, policy languages, and security integration into software development, testing, and deployment management.
Many organizations have framed security incentives poorly. Security has been positioned as loss avoidance, regulatory compliance, brand protection, and return on security investment measured in "soft dollars" that don't actually generate the stated return.