In policing, some of our best innovations come from practical necessity. We see a problem, we adapt, and we act. That’s exactly what happened when Project Vigilant was developed—an intelligence-led, proactive approach to tackling predatory behaviours in the Night-Time Economy (NTE). The idea is simple: don’t wait for a serious sexual offence to occur. Instead, identify, disrupt, and manage the behaviours that signal predatory intent before they escalate.
But despite the momentum, there’s something missing: evaluation. We know the approach makes sense. We see it working on the ground. Yet we lack the systematic evidence to prove its effectiveness and understand its true impact. That’s what this article is about—why we need to evaluate targeted policing of VAWG perpetrators and how we might go about it.
The Rise of Perpetrator-Focused Policing
For too long, policing responses to violence against women and girls (VAWG) have focused on victim safety rather than offender behaviour. Project Vigilant flips that logic. Instead of telling women to modify their behaviour, it targets those who cause harm—men who display predatory behaviour in the NTE.
When the first Vigilant patrols were deployed in Oxford, the focus was on tackling a serial ‘creeper burglar’—a man whose offending escalated from stalking and trespassing to sexual assault and rape. But what officers quickly realised was that he wasn’t alone. Multiple men were observed loitering, stalking, harassing women, and ignoring clear rejections. Some had prior convictions for rape and serious sexual assault. Yet, until that moment, they weren’t on police radars.
With funding, training, and a structured risk-assessment process, Project Vigilant was expanded across Thames Valley Police and is now influencing national practice. The approach combines plain-clothes officers identifying offenders with uniformed officers intervening—issuing a ‘stop and account’ to challenge behaviour. Those identified are then risk-assessed and managed, with responses ranging from education to active monitoring and enforcement.
This is promising, but we still need to ask: does it actually work?
What Are We Trying to Achieve?
Before we talk about evaluation, we need to be clear on objectives. A proactive policing model like this isn’t just about enforcement; it’s about changing behaviour and reducing harm. That means we need to assess multiple potential outcomes, such as:
Crime Prevention – Does targeting perpetrators lead to a reduction in serious sexual offences?
Disruption of Predatory Behaviour – Does visible intervention deter offenders from engaging in harassment and stalking?
Repeat Offending Rates – Do those identified through Project Vigilant reappear in the NTE, or does intervention reduce their presence?
Public Perceptions of Safety – Do women feel safer in NTE areas where proactive policing is in place?
Operational Impact – What does this approach cost in terms of policing resources, and is it sustainable?
Each of these questions requires evidence—not just anecdotal reports, but robust evaluation.
How Do We Measure Success?
Policing is often reluctant, consciously or unconsciously, to invest in evaluation because evaluation is seen as complex and time-consuming, or is thought about too late in the planning process. At the same time, evidence-based policing is too often associated with randomised controlled trials, which can be off-putting because of the time, the complexity, and fears about ethics. But evaluation doesn't have to be time-consuming or costly. My journey with Vigilant has given me plenty of food for thought about practical evaluation. Here is a popular tactic that resonates with police and politicians alike, yet aside from some exploratory descriptive statistics, we don't yet know what effects Vigilant tactics have on the people exposed to them. Are they deterred from crime? Does the contact affect their confidence in policing? These questions require answers if we are to scale Vigilant. Here's how we could practically assess Project Vigilant initiatives.
1. Descriptive Data Analysis
We already collect data on who is stopped, their risk level, and their behaviours. A deeper analysis of this dataset could provide insights into patterns and characteristics of our target population, for example, from my own research:

This ‘simple’ description of the home addresses of Vigilant subjects helps us to shape our follow-up service provision. It also outlines the scale of the threat posed by travelling criminality and emphasises the importance of sources like the PND in our toolkit. This, of course, is not evaluation; it doesn't answer the questions posed above. But perspectives like this do give us a baseline from which to begin making those comparisons.
2. Before-and-After Comparisons
Using recorded crime data, we could compare sexual offence rates in NTE areas before and after Vigilant patrols were introduced. While this isn't a perfect method (many factors influence crime rates), it gives an indication of impact. Similarly, we might look at the individual level, tracing differences in police contact before and after for the people stopped by Vigilant officers. This is likely to be limited by low baseline levels of recorded crime in this group of suspects. My research on the 378 Vigilant subjects we encountered in Thames Valley showed that we held prior records for only 35% of them. We might balance this by looking at follow-up prevalence (the proportion who go on to commit any kind of sexual offence), but by accepted standards we cannot claim that outcome to be caused by Vigilant.
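To make the logic concrete, here is a minimal sketch of a before-and-after comparison. The monthly offence counts are entirely made up for illustration; a real analysis would use recorded crime data for the NTE area in question, and the result would still need cautious interpretation for the reasons above.

```python
# Minimal before-and-after sketch. The monthly counts below are
# illustrative (made-up) figures, not real Thames Valley data.

def mean(counts):
    return sum(counts) / len(counts)

def before_after_change(before, after):
    """Return (mean before, mean after, percent change)."""
    b, a = mean(before), mean(after)
    return b, a, (a - b) / b * 100

# Hypothetical monthly sexual-offence counts, 12 months either side of launch
before = [9, 11, 8, 10, 12, 9, 10, 11, 9, 10, 8, 13]
after  = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7, 8, 7]

b, a, pct = before_after_change(before, after)
print(f"Mean before: {b:.1f}, mean after: {a:.1f}, change: {pct:+.1f}%")
# prints: Mean before: 10.0, mean after: 7.5, change: -25.0%
```

Even this simple calculation forces us to be explicit about the observation window and the unit of comparison, which is half the battle in practical evaluation.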
3. Quasi-Experimental Designs
For a more structured approach, we could identify comparison sites—similar NTE locations without proactive patrols—and compare outcomes. This would help isolate the effect of perpetrator-focused policing. Doing this with individuals may prove more problematic, because it is not entirely clear how we would identify a comparison group. We might match on the usual suspects (age, ethnicity), but these are far from precise enough to account for underlying differences. This is where strong causal evaluation at an individual level becomes difficult: we don't know what we don't know. There may be a perfect comparison group waiting in the night-time economy, but unless they are stopped by Vigilant officers we cannot document them, and by stopping them we have made them ineligible to be our comparison group.
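The comparison-site idea is essentially a difference-in-differences calculation: the change at the Vigilant site minus the change at the comparison site. A minimal sketch, with made-up annual counts and hypothetical sites:

```python
# Difference-in-differences sketch. The counts are illustrative
# assumptions, not real figures from any force area.

def did(treated_before, treated_after, control_before, control_after):
    """Change in the Vigilant site minus change in the comparison site."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical annual offence counts for a Vigilant site and a comparison site
effect = did(treated_before=120, treated_after=90,
             control_before=110, control_after=105)
print(effect)  # → -25; negative means a larger fall in the Vigilant site
```

The value of the comparison site is that it absorbs background trends (seasonal patterns, national changes in reporting) that a simple before-and-after comparison would wrongly attribute to Vigilant.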
4. Longitudinal Tracking
Tracking individuals stopped under Project Vigilant over time could reveal whether interventions alter behaviour or whether repeat offending continues. Are offenders flagged in other policing contexts later? Do they receive further safeguarding interventions? If we did this at volume, across multiple Vigilant sites and with consistent data recording, my boffin friends tell me that statistical modelling might provide some insight into what is most related to future offending behaviour. This is unlikely to tell us whether Vigilant is better or worse than business as usual, but it could help us refine the practices of risk assessment and follow-up. We need to organise ourselves to collect the right data, though. There is a host of things we need to understand about what happens to individuals after they are stopped, and about them as individuals. A “Vigilant data template” sounds dry, but it’s not a bad idea.
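As a thought experiment, a “Vigilant data template” might look something like the sketch below. Every field name here is my own illustration, not an agreed standard; the point is simply that consistent, structured recording across sites is what makes longitudinal tracking and later modelling possible.

```python
# Hypothetical "Vigilant data template" sketch. Field names and values
# are illustrative assumptions, not an agreed national data standard.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class VigilantRecord:
    subject_ref: str                 # anonymised identifier, not a name
    stop_date: date
    site: str                        # NTE location / force area
    behaviours_observed: list = field(default_factory=list)
    risk_level: str = "unassessed"   # e.g. low / medium / high
    prior_record: bool = False       # known to police before the stop?
    follow_up_tactic: str = ""       # e.g. education, monitoring, enforcement
    reoffence_within_12m: Optional[bool] = None  # completed at follow-up

record = VigilantRecord("TV-0001", date(2024, 6, 1), "Oxford",
                        ["loitering", "following"], "medium", False, "education")
print(record.risk_level)
```

Whatever the final field list looks like, agreeing it up front (and recording it the same way at every site) is what turns operational paperwork into evaluable data.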
5. Randomisation?
Randomisation doesn’t have to be a choice between Vigilant or nothing. If we treat Vigilant as a screening tool and randomise what we do after a person has been identified, then we have a path to a comparison group. The question is how to do this ethically. Everyone would agree that we need to do something, so the decision becomes more about what we do for different groups—and even whether this is the most important thing to evaluate.
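Mechanically, randomising the follow-up rather than the stop is straightforward. A minimal sketch, where the tactic names are illustrative assumptions and a fixed seed keeps the allocation auditable:

```python
# Sketch of randomising the follow-up tactic after identification,
# not the stop itself. Tactic names are illustrative assumptions.
import random

TACTICS = ["education letter", "follow-up visit", "active monitoring"]

def allocate(subject_refs, seed=42):
    """Randomly assign each identified subject to a follow-up tactic."""
    rng = random.Random(seed)  # fixed seed so the allocation is auditable
    return {ref: rng.choice(TACTICS) for ref in subject_refs}

allocation = allocate(["TV-0001", "TV-0002", "TV-0003"])
for ref, tactic in allocation.items():
    print(ref, "->", tactic)
```

Because every identified subject still receives an intervention, the ethical objection to a no-treatment control group falls away; the comparison is between tactics, not between action and inaction.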
Ethical Considerations: Balancing Rights and Risk
Targeting potential offenders before they commit a crime raises ethical questions. Some critics argue that stopping individuals before an offence is committed amounts to profiling. This is a valid concern, and any evaluation should explore:
False positives – Are innocent people being unnecessarily targeted?
Disproportionality – Are certain groups over-represented in stops?
Legal frameworks – Is this approach aligned with human rights legislation?
Good policing balances proactive risk management with individual rights. Evaluating Project Vigilant isn’t just about proving effectiveness—it’s about ensuring it’s done right. In this respect, I argue that it is important to describe and evaluate what happens with Vigilant subjects. We need to know which follow-up tactics have the best effect after identification if we are to navigate these issues.
Time to Act: The Case for Independent Evaluation
The policing profession has a habit of rolling out new practices without properly evaluating them. Project Vigilant is promising, but if we don’t step back and assess its impact, we risk missing an opportunity to refine and scale what works—or correct what doesn’t.
We need independent evaluation, supported by police forces, government, and research partners. Without one, we’re flying blind.
If you’re a practitioner, academic, or policymaker interested in this work, let’s start the conversation. How do we measure success? What data do we need? How can we refine this approach to make women and girls safer while maintaining ethical policing?
Project Vigilant showed us that targeting perpetrators is possible. Now, let’s make sure we’re doing it effectively.
ABOUT THE AUTHOR
Tina Wallace KPM is the tactical lead for Project Vigilant, a policing initiative identifying and disrupting predatory behaviour. With 27 years in covert operations, surveillance, and behavioural detection, she champions proactive, evidence-based policing. Committed to intelligence-led interventions, Tina works to prevent violence against women and girls through targeted disruption of offenders.
GO FURTHER
Watch our Easier Said Than Done episode with Tina and colleagues discussing how to implement Project Vigilant.
Read more about Tina's research in Going Equipped.