Karel Hynek | FIT CTU - Czech Technical University, Prague

Supported by CESNET

Karel Hynek

Karel is a PhD student at FIT CTU in Prague and a full-time researcher at CESNET a.l.e., the Czech national research and education network. He is active in network security research, focusing on high-speed network monitoring and ISP-level security protection. The outcomes of his research include network classifiers, detectors, and data exporters deployed on production ISP monitoring infrastructure.

Karel has participated in multiple national and international projects and has authored numerous award-winning research papers published at scientific conferences. He has also received the prestigious Stanislav Hanzel Award for talented students.

Lightning Talk | TNC23, Tirana, Albania

BREAKING DOWN AI TO GET EXPLANATIONS

Research shows that Artificial Intelligence (AI) can be an effective tool for automated cyber threat hunting. However, its lack of explainability still prevents mass deployment in commercial-grade tools: the models are so complex that people cannot comprehend them, and it is hard to obtain the reasons behind their output, which are crucial for cyber security incident handling and response. We therefore propose new AI design principles that divide complex monolithic models into small components, each with a defined functionality. By observing the outputs and interactions of these components, we gain insight into the model's internal behaviour and can provide explanations and reasoning behind its output. The approach resembles the human brain, where some parts are responsible for vision and others for hearing; knowing which part is responsible for what dramatically increases our understanding of such a complex system.
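To illustrate the idea, the following minimal sketch shows how a detector built from small, single-purpose components can report not only a verdict but also the per-component reasoning behind it. The components, features, and thresholds here are hypothetical examples for a simplified malicious-domain scenario, not the actual models discussed in the talk.

```python
"""Minimal sketch of a component-based detector with per-component explanations.
All component names, features, and thresholds are illustrative assumptions."""
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class ComponentResult:
    name: str      # which component produced the score
    score: float   # 0.0 (benign) .. 1.0 (malicious)
    reason: str    # human-readable explanation of the score


# Each component has one narrowly defined responsibility,
# so its output can be inspected and explained on its own.
def entropy_component(features: Dict) -> ComponentResult:
    entropy = features.get("domain_entropy", 0.0)
    score = min(entropy / 4.0, 1.0)  # hypothetical normalisation
    return ComponentResult("domain-entropy", score,
                           f"domain name entropy is {entropy:.2f}")


def age_component(features: Dict) -> ComponentResult:
    age_days = features.get("domain_age_days", 3650)
    score = 1.0 if age_days < 30 else 0.0
    return ComponentResult("domain-age", score,
                           f"domain registered {age_days} days ago")


def classify(features: Dict,
             components: List[Callable[[Dict], ComponentResult]]
             ) -> Tuple[bool, List[str]]:
    """Aggregate component scores and keep per-component reasons,
    so the final verdict comes with an explanation attached."""
    results = [component(features) for component in components]
    verdict = sum(r.score for r in results) / len(results) > 0.5
    explanation = [f"{r.name}: {r.score:.2f} ({r.reason})" for r in results]
    return verdict, explanation


if __name__ == "__main__":
    verdict, explanation = classify(
        {"domain_entropy": 3.8, "domain_age_days": 5},
        [entropy_component, age_component],
    )
    print("malicious" if verdict else "benign")
    for line in explanation:
        print(" -", line)
```

In this sketch, the explanation is simply the recorded output of each component; a monolithic model would produce only the final score, with no comparable insight into why it was reached.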

Component-based models are a viable path to deploying AI in cyber security and other high-stakes areas where explanations are necessary. The division into components provides better control over predictions and makes them explainable. The improved explainability also reduces the chance of design errors, supports model quality control, and increases users' overall trust. Moreover, the reasoning can be used to filter out obvious mistakes and increase AI reliability.
