2022-10-29, 17:30–17:50, Track 1
Attributing a new campaign or malware family to a known group is not an exact science. The skills it requires and the considerations surrounding it receive far less attention in training and discussion than the technical aspects of malware analysis. Yet it is often the part that garners the most attention from journalists and the general public. Proper attribution can add great value to a report: it helps organizations relate new activity to their threat model and gives researchers and law enforcement the means to link clusters of activity. When done wrong, however, it can undermine the credibility of the field and generate undue alarm. Since researchers base their attribution on available material, incorrect links can lead future efforts astray and create lasting confusion.
In this presentation, we will first explain how we do attribution using technical artifacts (such as code similarity and tool reuse), infrastructure, TTPs, and socio-political factors like victimology. We will use concrete examples from previous research to illustrate how these indicators can be used, or misused, to cluster activity. We will discuss the relative merits and reliability of these indicators, along with how they can be combined to arrive at a more accurate conclusion.
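To make the "code similarity" indicator concrete, one simple approach is to compare sets of features extracted from two samples (for example, imported API names or function hashes) with a Jaccard score. This is a minimal illustrative sketch, not any group's actual tooling; the feature sets and the metric choice are assumptions for demonstration only.

```python
def jaccard(features_a: set, features_b: set) -> float:
    """Ratio of shared features to total distinct features (0.0 to 1.0)."""
    if not features_a and not features_b:
        return 0.0
    return len(features_a & features_b) / len(features_a | features_b)

# Hypothetical feature sets for two malware samples: here, imported
# Windows API names. Real pipelines use richer features (function
# hashes, strings, rich headers, etc.).
sample_a = {"CreateRemoteThread", "VirtualAllocEx",
            "WriteProcessMemory", "RegSetValueExA"}
sample_b = {"CreateRemoteThread", "VirtualAllocEx",
            "WriteProcessMemory", "InternetOpenA"}

# 3 shared features out of 5 distinct ones -> 0.60
print(f"similarity: {jaccard(sample_a, sample_b):.2f}")
```

A high score on one such metric is weak evidence on its own; shared libraries or leaked builders can inflate it, which is exactly why this indicator must be weighed against infrastructure, TTPs, and victimology before drawing conclusions.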
As we go along, we'll cover the pitfalls associated with each of these indicators, with examples of how we can get it wrong. We'll also bring up other obstacles encountered when doing attribution, including how definitions of certain groups vary among researchers, tool sharing, and so-called "umbrella groups" that encompass multiple sub-groups.
The presentation will conclude with a discussion of the importance of documenting the reasoning and confidence level behind attribution claims. We will briefly touch on the larger ethical and social considerations surrounding this issue to encourage researchers to be rigorous both when attributing threats and when evaluating claims from external reporting.